**Signals and Communication Technology**

# Ying-Chang Liang

# Dynamic Spectrum Management

From Cognitive Radio to Blockchain and Artificial Intelligence

# Signals and Communication Technology

#### Series Editors

- Emre Celebi, Department of Computer Science, University of Central Arkansas, Conway, AR, USA
- Jingdong Chen, Northwestern Polytechnical University, Xi'an, China
- E. S. Gopi, Department of Electronics and Communication Engineering, National Institute of Technology, Tiruchirappalli, Tamil Nadu, India
- Amy Neustein, Linguistic Technology Systems, Fort Lee, NJ, USA
- H. Vincent Poor, Department of Electrical Engineering, Princeton University, Princeton, NJ, USA

This series is devoted to fundamentals and applications of modern methods of signal processing and cutting-edge communication technologies. The main topics are information and signal theory, acoustical signal processing, image processing and multimedia systems, mobile and wireless communications, and computer and communication networks. Volumes in the series address researchers in academia and industrial R&D departments. The series is application-oriented. The level of presentation of each individual volume, however, depends on the subject and can range from practical to scientific.

"Signals and Communication Technology" is indexed by Scopus.

More information about this series at http://www.springer.com/series/4748


Ying-Chang Liang Innovation Center University of Electronic Science and Technology of China Chengdu, Sichuan, China

ISSN 1860-4862 / ISSN 1860-4870 (electronic)
Signals and Communication Technology
ISBN 978-981-15-0775-5 / ISBN 978-981-15-0776-2 (eBook)
https://doi.org/10.1007/978-981-15-0776-2

© The Editor(s) (if applicable) and The Author(s) 2020. This book is an open access publication. Open Access This book is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

### Preface

Radio spectrum, as an indispensable enabler of wireless communications, is becoming a severely scarce resource amid the explosive growth of wireless traffic and massive device connections. Because of this spectrum scarcity, the millimeter-wave and terahertz bands are being explored for cellular mobile communications. However, measurements have shown that the radio spectrum is underutilized due to the static and exclusive spectrum allocation method in use today. It is expected that spectrum allocation policy will evolve from this fixed manner to dynamic spectrum management (DSM), in order to make full use of the radio spectrum.

The success of DSM, however, depends not only on the availability of technical methodologies, but also on support from spectrum policy. Cognitive radio (CR) is the state-of-the-art enabling technique for DSM. With CR, an unlicensed/secondary user is able to opportunistically or concurrently access spectrum bands owned by licensed/primary users. Blockchain, as an essentially open and distributed ledger, incentivizes the formulation and secures the execution of DSM policies. Finally, artificial intelligence (AI) techniques help users observe and interact with the dynamic radio environment, thereby improving the efficiency and robustness of CR and blockchain for DSM.

This book provides a systematic overview of these three technologies for DSM and reviews several communication systems that use DSM. It is intended for a broad range of readers, including students and researchers in wireless communications as well as radio spectrum policymakers. We hope the concepts, theories and methodologies presented in this book offer useful references and guidance to readers.

Chengdu, China

Ying-Chang Liang

# Acknowledgements

This book was supported by the National Natural Science Foundation of China under Grant 61631005, Grant U1801261, and Grant 61571100.

The author would like to thank Yonghong Zeng, Yiyang Pei, Shiying Han, Qiao Xu, Shisheng Hu and Yang Cao for their contributions and support in preparing this book. He is grateful to the University of Electronic Science and Technology of China (UESTC) and the Institute for Infocomm Research, Singapore, for providing him with an excellent research environment in which to conduct most of the work described in this book.

# Contents






# Acronyms





# **Chapter 1 Introduction**

**Abstract** Facing the increasing demand for radio spectrum to support emerging wireless services with heavy traffic, massive connections and diverse quality-of-service (QoS) requirements, spectrum management has become unprecedentedly challenging. Given that the traditional fixed spectrum allocation policy leads to inefficient usage of spectrum, dynamic spectrum management (DSM) has been proposed as a promising way to mitigate the spectrum scarcity problem. This chapter introduces DSM by first discussing its background and then presenting its two popular models: the opportunistic spectrum access (OSA) model and the concurrent spectrum access (CSA) model. Three main enabling techniques for DSM, namely cognitive radio (CR), blockchain and artificial intelligence (AI), are briefly introduced.

#### **1.1 Background**

Radio spectrum is a natural but limited resource that enables wireless communications. Access to the radio spectrum is regulated by government agencies, such as the Federal Communications Commission (FCC) in the United States (US), the Office of Communications (Ofcom) in the United Kingdom (UK), and the Infocomm Development Authority (IDA) in Singapore. Conventionally, the regulatory authorities adopt the *fixed spectrum access* (FSA) policy, allocating different parts of the radio spectrum, each with a certain bandwidth, to different services. In Singapore, for example, the 1805–1880 MHz band is allocated to GSM-1800 and cannot be accessed by other services at any time. Under such a static and exclusive spectrum allocation policy, only the authorized users, also known as licensed users, have the right to utilize the assigned spectrum; all other users are forbidden from accessing it, regardless of whether the assigned spectrum is busy or not. Although FSA successfully avoids interference among different applications and services, it quickly exhausts the radio resource as new services and networks proliferate, resulting in the spectrum scarcity problem.

Statistics on spectrum allocation around the world show that the radio spectrum has been almost fully allocated, and the spectrum available for deploying new services is quite limited. The emergence of massive connections of internet-of-things (IoT) devices accelerates the spectrum scarcity crisis. According to the study in [1], around 76 GHz of spectrum would be needed to accommodate billions of end devices through exclusive spectrum occupation. Nevertheless, extensive measurements conducted worldwide, such as in the US [2], Singapore [3], Germany [4], New Zealand [5] and China [6], have revealed that large portions of the allocated radio spectrum are underutilized. For instance, in the US, the average occupancy of the 0–3 GHz radio spectrum in Chicago is 17.4%, and as low as approximately 1% in West Virginia. In Singapore, the average occupancy of the 80–5850 MHz band is less than 5%. These findings reveal that the inflexible spectrum allocation policy leads to inefficient utilization of radio spectrum, and contributes to the spectrum scarcity problem even more strongly than any physical shortage of radio spectrum.

#### **1.2 Dynamic Spectrum Management**

The contradiction between the scarcity of available spectrum and the underutilization of allocated spectrum necessitates a paradigm shift from inefficient FSA to flexible and highly efficient spectrum access. In this context, *dynamic spectrum management* (DSM) has been proposed and recognized as an effective approach to mitigate the spectrum scarcity problem. It has been foreseen that by using DSM, the spectrum required to deploy billions of internet-of-things (IoT) devices can be sharply reduced from 76 to 19 GHz [7]. In DSM, users without a license, known as secondary users (SUs), can access the spectrum of authorized users, known as primary users (PUs), when the primary spectrum is idle, or can even share the primary spectrum provided that the services of the PUs are properly protected. By doing so, the SUs gain transmission opportunities without requiring dedicated spectrum. This spectrum access policy is known as *dynamic spectrum access* (DSA). According to the way the PUs and SUs coexist, there are two basic DSA models: (1) the *opportunistic spectrum access* (OSA) model and (2) the *concurrent spectrum access* (CSA) model.

#### *1.2.1 Opportunistic Spectrum Access*

Spectrum usage in the OSA model is illustrated in Fig. 1.1. Due to the sporadic nature of PU transmissions, there are time slots, frequency bands or spatial directions in which the PU is inactive. The frequency bands on which the PUs are inactive are referred to as *spectrum holes*. Once one or more spectrum holes are detected, the SUs can temporarily access the primary spectrum without interfering with the PUs by configuring their carrier frequency, bandwidth and modulation scheme to transmit on the spectrum holes. When the PUs become active again, the SUs have to cease their transmissions and vacate the current spectrum. To enable the operation of OSA, the SU needs accurate information about spectrum holes, so that the quality of service (QoS) of the PUs can be protected. Two factors determine the method the SU can adopt to detect spectrum holes: the predictability of the PU's presence and absence, and whether the primary system can actively provide its spectrum usage information. Accordingly, there are two basic methods by which SUs can detect spectrum holes.

#### (1) *Geolocation Database*

If the PU's activity is regular and highly predictable, the geographical and temporal usage of spectrum can be recorded in a geolocation database that provides the accurate status of the primary spectrum. To access the primary spectrum without interfering with the PUs, an SU first obtains its own geographic coordinates via an available positioning system, and then queries the geolocation database for a list of bands on which the PUs are inactive at the SU's location. The geolocation database approach is suitable when the PUs' presence and absence are highly predictable and the spectrum usage information can be publicized, enabling highly efficient utilization of spectrum [8–10]. For example, in the final rules set by the FCC for unlicensed access over TV bands, the geolocation database is the only approach adopted by unlicensed devices for protecting incumbent TV broadcasting services [11]. Nevertheless, for other services, such as cellular communications, user activity is difficult to predict and there is a lack of incentive for the PUs to disclose their spectrum usage, especially when the primary and secondary services belong to different operators. In such cases, the geolocation database is inapplicable.
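The lookup flow just described can be sketched in a few lines. Everything here (the grid quantization, the `GEO_DB` layout, the function names) is an illustrative assumption rather than any real white-space database API:

```python
# Hypothetical geolocation-database lookup: SU position -> grid cell -> idle bands.

# Toy database: grid cell -> TV channels with no active incumbent there.
GEO_DB = {
    (1, 3): [21, 24, 31],   # idle channels in grid cell (1, 3)
    (1, 4): [24],
    (2, 3): [],             # no idle channels in this cell
}

def to_grid_cell(lat, lon, cell_deg=0.1):
    """Quantize GPS coordinates to the coarse grid cell used as the DB key."""
    return (int(lat / cell_deg), int(lon / cell_deg))

def idle_channels(lat, lon):
    """Return the channels an SU may use at this location (empty if none)."""
    return GEO_DB.get(to_grid_cell(lat, lon), [])

print(idle_channels(0.15, 0.33))  # cell (1, 3) -> [21, 24, 31]
```

A real white-space database would also track time-of-day schedules and protection contours, but the query pattern (position, then cell, then idle-band list) is the same.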

#### (2) *Spectrum Sensing*

Without a geolocation database, an SU can carry out *spectrum sensing* periodically or continuously to monitor the primary spectrum and detect spectrum holes. When there are multiple SUs, cooperative spectrum sensing can be applied to improve the sensing accuracy [12–17]. Unlike the previous method, in which accurate spectrum usage information is recorded in the geolocation database, spectrum sensing is essentially a signal detection technique, which can be imperfect due to the presence of noise and channel impairments such as small-scale fading and large-scale shadowing [18]. The performance of spectrum sensing is measured by two main metrics: the probability of detection and the probability of false alarm. The former is the probability of declaring the PU present when the PU is indeed active; it describes the degree of protection offered to the PUs, i.e., a higher probability of detection provides better protection. The latter is the probability of declaring the PU present when the PU is actually inactive; it can be regarded as an indicator of how well the spectrum access potential is exploited. A lower probability of false alarm means more transmission opportunities can be utilized by the SUs, and thus better SU performance, such as throughput, can be achieved. A good spectrum sensing design should therefore have a high probability of detection and a low probability of false alarm. However, these two metrics generally conflict: for a given sensing scheme, improving the probability of detection comes at the expense of an increased probability of false alarm, which leaves fewer spectrum access opportunities for the SUs. In other words, better protection of the PUs comes at the expense of degraded SU performance.

To improve the performance of spectrum sensing, much work has been devoted to designing different detection schemes and to allowing multiple SUs to cooperatively perform spectrum sensing [12–17].
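To make the detection/false-alarm trade-off concrete, the following Monte Carlo sketch simulates a simple energy detector. The noise model, sample size, SNR and 1% false-alarm target are deliberately idealized illustrative assumptions, not a scheme from this book:

```python
# Monte Carlo sketch of single-user energy detection. Assumptions: unit-power
# complex Gaussian noise, a Gaussian PU signal at a fixed linear SNR.
import numpy as np

rng = np.random.default_rng(0)
N, snr, trials = 200, 0.5, 20000   # samples per window, linear SNR, runs

def cgauss(shape, power):
    """Complex Gaussian samples with the given average power."""
    return np.sqrt(power / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

noise = cgauss((trials, N), 1.0)
signal = cgauss((trials, N), snr)

stat_h0 = np.mean(np.abs(noise) ** 2, axis=1)            # test statistic, PU absent
stat_h1 = np.mean(np.abs(noise + signal) ** 2, axis=1)   # test statistic, PU present

thresh = np.quantile(stat_h0, 0.99)   # threshold chosen for a ~1% false alarm
p_fa = np.mean(stat_h0 > thresh)      # probability of false alarm
p_d = np.mean(stat_h1 > thresh)      # probability of detection
print(f"P_fa = {p_fa:.3f}, P_d = {p_d:.3f}")
```

Lowering the threshold raises the measured probability of detection but also the probability of false alarm, reproducing the conflict described above.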

It is worth noting that spectrum sensing is an essential tool for enabling DSA and deserves continued development from the research community. Although spectrum sensing is not mandatory for unlicensed access over TV white space, current standards such as IEEE 802.22 and ECMA 392 still use a combination of geolocation database and spectrum sensing [19–21]. In the literature, the OSA model is also referred to as spectrum overlay [22] or the interweave paradigm [23].

#### *1.2.2 Concurrent Spectrum Access*

A typical CSA model is shown in Fig. 1.2, in which the SU and the PU transmit on the same primary spectrum concurrently. In this type of DSA, the secondary transmitter (SU-Tx) inevitably produces interference at the primary receiver (PU-Rx). Thus, to enable the operation of CSA, the SU-Tx needs to predict the interference level its own transmission causes at the PU-Rx, and limit that interference to an acceptable level in order to protect the PU service. In practice, a communication system is usually designed to tolerate a certain amount of interference. For example, a user in a code-division multiple access (CDMA) based third-generation (3G) cellular network can tolerate interference from other users and compensate for the degradation of signal-to-interference-plus-noise ratio (SINR) via the embedded inner-loop power control. This level of tolerable interference is known as the *interference temperature*, also referred to in the literature as the interference margin. The concurrent transmission of the SU-Tx is allowed only when the interference received by the PU-Rx is no larger than the interference temperature. Therefore, unlike the OSA model, in which a geolocation database or spectrum sensing is used to detect spectrum holes, interference control is critical for CSA to protect the PU services.

The protection of the PU is mathematically formulated as an interference power constraint. A basic interference power constraint requires that the instantaneous interference power received by the PU-Rx be no larger than the interference temperature. Such a formulation requires the SU-Tx to know the interference temperature set by the PU-Rx and the channel state information (CSI) from SU-Tx to PU-Rx, also known as cross channel state information (C-CSI), in order to quantify the actual interference received by the PU-Rx. Variants of the basic interference power constraint result in different secondary system performance. For example, the average interference power constraint yields better secondary throughput than the peak interference power constraint [24, 25], because the former is less stringent and in some fading states allows the interference to exceed the interference temperature. Furthermore, if there are multiple SUs, the secondary system can exploit multiuser diversity (MUD) to improve secondary capacity by activating the SU with the best receive quality and the least interference to the PU-Rx for transmission or reception. The MUD of sharing a single frequency band was carefully studied in [26–29]. To benefit from interference diversity or MUD, the CSI from SU-Tx to SU-Rx and from SU-Tx to PU-Rx should be known by the SU-Tx.
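In illustrative notation (an assumption for exposition, not the book's own symbols), let $g$ denote the SU-Tx to PU-Rx power gain in a given fading state, $p$ the SU transmit power, and $\Gamma$ the interference temperature. The peak and average interference power constraints can then be written as:

```latex
g\,p \le \Gamma \quad \text{for every fading state } g
\qquad \text{(peak interference power constraint)}

\mathbb{E}\big[\, g\,p(g) \,\big] \le \Gamma
\qquad \text{(average interference power constraint)}
```

The expectation in the second form is taken over fading states, which is why it admits occasional violations of $\Gamma$ in individual states and hence supports a higher secondary throughput.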

Exploiting primary system information can offer more sharing opportunities. In [30], a rate loss constraint was proposed to restrict the performance degradation of the PU due to the secondary transmission. To formulate this constraint, not only the C-CSI but also the CSI from PU-Tx to PU-Rx and the transmit power of the PU-Tx are required [31]. Without direct cooperation between the primary and secondary systems, the SU-Tx can trigger the power adaptation of the primary system by intentionally sending a probing signal with high power [32, 33]. To counter the strong interference, the PU-Tx will increase its transmit power, which can be heard by the SU-Rx. The secondary system then deduces the interference temperature set by the primary system and estimates the C-CSI, which is the critical information the secondary system needs to successfully share the primary spectrum.

It is worth noting that, as in OSA, the protection of the primary system and the secondary throughput are in tension with each other: a stringent protection requirement for the primary system leads to low secondary throughput. Therefore, making good use of the interference temperature is the way to optimize the performance of the secondary system in the CSA model. In the literature, the CSA model is also referred to as spectrum underlay [33].

The comparison of OSA and CSA is summarized in Table 1.1. When the PU is off, the SU can transmit at its maximum power under the OSA model. When the PU is on, the SU can still transmit under the CSA model by regulating its transmit power, rather than keeping silent as the OSA model requires. A hybrid spectrum access model combining the benefits of OSA and CSA therefore achieves higher spectrum utilization. Moreover, in the OSA and CSA models described above, the PUs have higher priority than the SUs and thus must be protected. Such DSA is also known as the hierarchical access model, since the priority of accessing the spectrum depends on whether a user is licensed or not. In the hierarchical access model, the PUs are usually legacy users, so cooperation between the primary and secondary systems is unavailable, and the SUs alone are responsible for carrying out spectrum detection or interference control. In some cases, the primary system is willing to lease its temporarily unused spectrum to the SUs in exchange for a leasing fee, which gives the PUs an incentive to provide some form of cooperation. DSA models in which all users have equal priority to access the spectrum have also received much attention, such as licensed shared access (LSA) and spectrum sharing in unlicensed bands [34–36]. Although no cap is imposed on the interference introduced to others in this DSA model, each user has the responsibility to protect the others or to maintain fairness in accessing the spectrum.


**Table 1.1** Comparison of OSA and CSA

| | OSA | CSA |
|---|---|---|
| SU access when PU is active | Not allowed; SU must vacate | Allowed, with regulated transmit power |
| PU protection method | Spectrum hole detection (geolocation database or spectrum sensing) | Interference control under the interference temperature |
| Key information required | Spectrum hole locations | Interference temperature and C-CSI |
| Also known as | Spectrum overlay / interweave | Spectrum underlay |
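The hybrid OSA/CSA behavior discussed above reduces to a simple power rule. The names below (`P_MAX`, `g`, `gamma`) and the numbers are illustrative assumptions:

```python
# Toy SU power decision combining OSA and CSA. gamma is the interference
# temperature and g the SU-Tx -> PU-Rx power gain (illustrative values).
P_MAX = 1.0   # SU hardware power limit (illustrative)

def su_transmit_power(pu_active, g, gamma):
    """OSA when the PU is off; CSA-style power capping when the PU is on."""
    if not pu_active:
        return P_MAX                 # OSA: spectrum hole, transmit at full power
    return min(P_MAX, gamma / g)     # CSA: keep interference g * p <= gamma

print(su_transmit_power(False, 2.0, 0.5))  # 1.0  (PU idle: full power)
print(su_transmit_power(True, 2.0, 0.5))   # 0.25 (capped so that g * p = gamma)
```

The hybrid model's gain over pure OSA comes entirely from the second branch: the SU keeps transmitting, at reduced power, instead of falling silent.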

#### **1.3 Cognitive Radio for Dynamic Spectrum Management**

*Cognitive radio* (CR) has been widely recognized as the key technology enabling DSA. A CR is an intelligent radio system that can dynamically and autonomously adapt its transmission strategies, including carrier frequency, bandwidth, transmit power, antenna beam and modulation scheme, based on its interaction with the surrounding environment and awareness of its internal states (e.g., hardware and software architectures, spectrum use policy, user needs) to achieve the best performance. This reconfiguration capability is realized by a software-defined radio (SDR) processor, with which the transmission strategies are adjusted by computer software. Moreover, a CR is also built with cognition, which allows it to observe the environment through sensing, analyse and process the observed information through learning, and decide the best transmission strategy through reasoning. Although most existing CR research to date has focused on exploring and realizing the cognitive capability to facilitate DSA, very recent research has begun to explore the further potential inherent in CR technology through artificial intelligence (AI).

A typical cognitive cycle for a CR is shown in Fig. 1.3. An SU with CR capability periodically or continuously observes the environment to obtain information such as spectrum holes in OSA, or the interference temperature and C-CSI in CSA. Based on the collected information, it determines the best operational parameters to optimize its own performance subject to the protection of the PUs, and then reconfigures its system accordingly. The information collected over time can also be used to analyse the radio environment, such as traffic statistics and channel fading statistics, so that the CR device can learn to perform better in future dynamic adaptation.

**Fig. 1.3** The cognitive cycle for CR
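The cycle in Fig. 1.3 can be sketched as a loop over observe, learn, decide and reconfigure steps. The classes and the alternating PU activity below are toy stand-ins of our own devising, not any real CR API:

```python
class ToyEnvironment:
    """Alternating PU activity, standing in for real sensing results."""
    def __init__(self):
        self.t = 0
    def observe(self):
        self.t += 1
        return {"pu_active": self.t % 2 == 0}

class ToyRadio:
    """Transmits only in sensed spectrum holes (pure OSA behavior)."""
    def __init__(self):
        self.mode = "idle"
    def decide(self, obs, history):
        return "idle" if obs["pu_active"] else "transmit"
    def reconfigure(self, params):
        self.mode = params

def cognitive_cycle(radio, environment, steps):
    """Observe -> learn -> decide -> reconfigure, repeated every slot."""
    history = []
    for _ in range(steps):
        obs = environment.observe()    # sense the radio environment
        history.append(obs)            # accumulate observations for learning
        radio.reconfigure(radio.decide(obs, history))
    return history

radio = ToyRadio()
hist = cognitive_cycle(radio, ToyEnvironment(), 4)
print([h["pu_active"] for h in hist], radio.mode)  # [False, True, False, True] idle
```

A real CR would replace `decide` with learned policies over the accumulated `history` (traffic and fading statistics), which is where the AI techniques of later chapters enter.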

Although enabling DSA with CR is a technical issue involving multidisciplinary efforts from various research communities, such as signal processing, information theory, communications, computer networking, and machine learning [37], its realization also depends largely on the willingness of regulators to open the spectrum for unlicensed access. Fortunately, over the past decades, regulatory bodies worldwide have worked to eliminate regulatory barriers to DSA. In the US, for example, the FCC set forth several proposals in November 2000 for removing unnecessary regulations that inhibit the development of secondary spectrum markets [38]. In December 2003, the FCC recognized the importance of CR and promoted its use for improving spectrum utilization [39]. In May 2004, the FCC issued a notice of proposed rulemaking (NPRM) proposing to allow unlicensed devices (both fixed and personal/portable) to reuse the temporarily unused spectrum of TV channels, i.e., TV white space [40]; the rules for such unlicensed use were finalized in September 2010 [11]. In the national broadband plan released in March 2010, the FCC also indicated its intention to enable more flexible spectrum access for unlicensed and opportunistic uses [41]. TV white space is considered very promising for a wide range of potential applications due to its favorable propagation characteristics [11], and has hence drawn attention from other regulators worldwide. In the UK, Ofcom proposed to allow licence-exempt CR devices to operate over the spectrum freed up by the analog-to-digital TV switchover in the statement of the Digital Dividend Review project released in December 2007 [42]. In Singapore, the IDA has also recognized the potential of TV white space technology and has conducted trials to test its feasibility and develop a regulatory framework to facilitate it [43].

Besides regulators' efforts on spectrum "deregulation", various standardization communities have been actively developing industrial standards to expedite the commercialization of CR-based applications. Following the FCC's NPRM of May 2004, the IEEE 802.22 working group was formed in November 2004 to develop the first international standard utilizing TV white space based on CR [44, 45]. The standard specifies an air interface (both physical (PHY) layer and medium access control (MAC) layer) for a wireless regional area network (WRAN), designed to provide wireless broadband access in rural and suburban areas for license-exempt fixed devices through secondary opportunistic access over the VHF/UHF TV broadcast bands between 54 and 862 MHz. The finalized version was published in July 2011 [20]. The first international CR standard for personal/portable devices over TV white space is ECMA 392 [46]. The first edition of that standard was finalized in December 2009 by ECMA International, based on a draft specification contributed by the Cognitive Networking Alliance (CogNeA) [46]. It specifies an air interface as well as a MUX sublayer for higher layer protocols [21], targeting in-home, in-building and neighborhood-area applications in urban areas [46]. Other standards based on CR include IEEE 802.11af, IEEE 802.19, IEEE SCC 41 (previously known as IEEE 1900), as well as Third Generation Partnership Project (3GPP) LTE Release 13, which introduces licensed-assisted access (LAA) to utilize the 5 GHz unlicensed bands for LTE operation [9, 47–49].

#### **1.4 Blockchain for Dynamic Spectrum Management**

The past decade has witnessed the rise of blockchain, which is essentially an open and distributed ledger. Cryptocurrency, most notably bitcoin [50], is one of the most successful applications of blockchain. The price of one bitcoin started at \$0.30 at the beginning of 2011 and peaked at \$19,783.06 on 17 December 2017, revealing the financial industry's optimism about it. Facebook, one of the world's biggest technology companies, announced its own cryptocurrency project, Libra, in June 2019. Beyond cryptocurrency, blockchain's salient characteristics lend it to many uses, including financial services, smart contracts, and the IoT. According to a report from Tractica, a market intelligence firm, enterprise blockchain revenue will reach \$19.9 billion by 2025 [51]. Moreover, blockchain is believed to bring new opportunities to improve efficiency and reduce cost in dynamic spectrum management.

Blockchain is essentially an open and distributed ledger in which transactions are securely recorded in blocks. Each block records a unique pointer determined by the transactions in the previous block. In this way, blocks are chained chronologically and are tamper-evident, i.e., tampering with any transaction stored in a previous block can be detected efficiently. Transactions initiated by one node are broadcast to other nodes, and a consensus algorithm decides which node is authorized to validate the new block by appending it to the blockchain. With this decentralized validation and record mechanism, a blockchain is transparent, verifiable and robust against single points of failure. Based on the level of decentralization, blockchains can be categorized into public, private and consortium blockchains. A public blockchain can be verified and accessed by all nodes in the network, while a private or consortium blockchain can only be maintained by permissioned nodes.
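The chaining and tamper evidence just described can be demonstrated with a minimal hash-linked ledger. This is a toy sketch of the data structure only, with no consensus or networking:

```python
# Minimal hash-chained ledger: each block stores a pointer derived from the
# previous block's full contents, so altering any past transaction breaks the chain.
import hashlib, json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Recompute every pointer; tampering anywhere upstream is detected."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, ["A leases band 21 to B"])
append_block(chain, ["B pays A 5 tokens"])
print(verify(chain))                                   # True
chain[0]["transactions"][0] = "A leases band 99 to B"  # tamper with history
print(verify(chain))                                   # False: chain broken
```

In a real blockchain, many nodes hold copies of the chain and a consensus algorithm decides who appends the next block, which is what makes the tamper evidence enforceable.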

A smart contract, supported by blockchain technology, is a self-executing contract whose clauses are transformed into programming scripts and stored in a transaction. When such a transaction is stored in the blockchain, the smart contract is assigned a unique address, through which nodes in the network can access and interact with it. A smart contract can be triggered when pre-defined conditions are satisfied or when nodes send transactions to its address. Once triggered, the smart contract is executed in a prescribed and deterministic manner: the same input, i.e., the transaction sent to the smart contract, always yields the same output. With a smart contract, disputes between nodes about transactions are eliminated, since each node can verify the contract's execution outcome by accessing it.
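The deterministic execution property can be mimicked by a pure function of the stored state and the incoming transaction: every node replaying the same inputs reaches the same outcome. The contract logic below (a spectrum-lease rule with made-up fields) is purely illustrative:

```python
# Toy "smart contract": a deterministic state transition, so replaying the
# same transactions on any node yields the same lease assignments.
def spectrum_lease_contract(state, tx):
    """Grant the lease iff the band is free and the offered fee suffices."""
    band, bidder, fee = tx["band"], tx["from"], tx["fee"]
    if band not in state["leases"] and fee >= state["min_fee"]:
        state["leases"][band] = bidder
        return "granted"
    return "rejected"

state = {"min_fee": 10, "leases": {}}
print(spectrum_lease_contract(state, {"band": 21, "from": "SU1", "fee": 12}))  # granted
print(spectrum_lease_contract(state, {"band": 21, "from": "SU2", "fee": 50}))  # rejected: band taken
```

Real smart contracts run inside a blockchain virtual machine, but the dispute-free property rests on exactly this determinism: same state plus same transaction gives same result on every node.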

Blockchain has been investigated to support various IoT applications. As a decentralized ledger, blockchain can help integrate heterogeneous IoT devices and securely store the massive data they produce. For instance, with data such as how, where and when the different stages of production are completed being immutably recorded in a blockchain and traceable by consumers, product quality can be guaranteed. Other uses of blockchain in the IoT include smart manufacturing, smart grid and healthcare [52]. Moreover, blockchain has also been applied to manage mobile edge computing resources to support IoT devices with limited computation capacity [53].

Recently, telecommunication regulatory bodies have paid much attention to applying blockchain technology to improve the quality of services such as telephone number management and spectrum management. In the UK, Ofcom initiated a project to explore how blockchain can be used to manage telephone numbers [54]. Specifically, a decentralized database could be established using blockchain technology to improve the customer experience when moving a number between service providers, to reduce regulatory costs, and to prevent nuisance calls and fraud. Blockchain is also believed to bring new opportunities to spectrum management. According to a recent speech by FCC commissioner Jessica Rosenworcel, blockchain might be used to monitor and manage spectrum resources, reducing administration cost and speeding up spectrum auctions [55]. It was also stated that, with the transparency of blockchain, the real-time spectrum usage recorded in it can be accessed by any interested user. Spectrum utilization efficiency can thus be further improved by dynamically allocating spectrum bands according to the dynamic demands users submit via the blockchain.

Researchers have also investigated the application of blockchain to spectrum management. In [56], applications of blockchain to spectrum management are discussed by pairing different modes of spectrum sharing with different types of blockchains. In [57], the authors discuss the benefits of applying blockchain to the Citizens Broadband Radio Service (CBRS) spectrum sharing scheme. In [58], dynamic spectrum access enabled by spectrum auctions is secured by the use of blockchain. In [59], a smart contract supported by blockchain is used to broker the spectrum sensing service provided by sensors to secondary users for opportunistic spectrum access. In [60], dynamic spectrum access is enabled by combining a blockchain-backed cryptocurrency with an auction mechanism, providing an effective incentive for cooperative sensing and a fair method to allocate the collaboratively obtained spectrum access opportunities.

Essentially, there is a need to derive some basic principles explaining why and how applying blockchain to dynamic spectrum management can be beneficial. Specifically, we can use blockchain (1) as a secure database, and (2) to establish a self-organized spectrum market. Moreover, challenges such as how to deploy the blockchain network over the cognitive radio network should also be addressed.


• *Self-Organized Spectrum Market*: With its support for decentralized transaction verification, blockchain can be used to establish a self-organized spectrum market. For example, traditional spectrum auctions usually need a trusted authority to verify the identities of the users, decide the winning user and settle the payment. With blockchain, and specifically with smart contracts, spectrum auctions can be held in a secure, automatic and uncontroversial way. Moreover, with the security provided by blockchain, transactions can be made between users without any mutual trust. Thus, the barrier to obtaining spectrum resources can be lowered. Besides the spectrum access right, other property and services related to spectrum management, such as spectrum sensing, can also be traded between users through smart contracts.
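To make the smart-contract idea concrete, below is a minimal, purely conceptual Python sketch of a sealed-bid spectrum auction that is settled automatically, without a trusted auctioneer. All class and method names are illustrative and are not tied to any real blockchain platform.

```python
# Conceptual sketch: a smart-contract-style spectrum auction. Bids are
# recorded, and the winner and payment are settled by code rather than by
# a trusted authority. Names here are illustrative, not a real chain API.

class SpectrumAuctionContract:
    def __init__(self, band_id):
        self.band_id = band_id
        self.bids = {}          # bidder -> bid amount (would live on-chain)
        self.closed = False

    def submit_bid(self, bidder, amount):
        assert not self.closed, "auction already settled"
        self.bids[bidder] = amount

    def settle(self):
        """Second-price settlement: highest bidder wins, pays the
        second-highest bid."""
        self.closed = True
        ranked = sorted(self.bids.items(), key=lambda kv: kv[1], reverse=True)
        winner, _ = ranked[0]
        price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
        return winner, price

auction = SpectrumAuctionContract(band_id="2.4GHz-ch6")
auction.submit_bid("SU_A", 10)
auction.submit_bid("SU_B", 14)
auction.submit_bid("SU_C", 12)
winner, price = auction.settle()   # SU_B wins and pays 12
```

The second-price rule is one common auction design; the point of the sketch is only that settlement logic can be expressed as deterministic code that every blockchain node can verify.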

• *Deployment of Blockchain*: The consensus algorithm, such as Proof of Work (PoW), through which a new block of transactions is added to the blockchain, is usually computationally intensive. Thus, the limited computational capacity and battery of mobile users need to be considered when deploying a blockchain over the traditional cognitive radio network. Based on this, we will discuss three deployment options: (1) letting users directly maintain a blockchain, (2) using a dedicated blockchain maintained by a third-party authority, and (3) allowing users to offload the computation task to edge computing service providers (ECSPs) while keeping the verification and validation authority to themselves.

In summary, blockchain technologies are believed to bring new opportunities to dynamic spectrum management, hopefully improving decentralization, security and autonomy, and reducing the administration cost. Meanwhile, challenges such as energy consumption and the deployment and design of the blockchain network over the traditional cognitive radio network should also be investigated. A detailed and systematic investigation of applying blockchain technologies to dynamic spectrum management will be given in Chap. 5, which should clarify these directions for readers interested in the relevant research.

#### **1.5 Artificial Intelligence for Dynamic Spectrum Management**

AI, a discipline concerned with constructing intelligent machine agents, has received increasing attention. AlphaGo, the most famous AI agent, has beaten many professional human Go players since 2015 [61]. In 2017, AlphaGo Master, the successor of AlphaGo, even defeated Ke Jie, who was the world's No. 1 Go player at the time. The concept of AI was proposed by John McCarthy in 1956, and its primary goal is to enable a machine agent to perform complex tasks by learning from the environment [62]. Nowadays, AI has become one of the hottest topics in both academia and industry, and it is even believed to lead the development of the information age [63]. Specifically, AI techniques have been successfully applied in many fields such as face and speech recognition. Moreover, AI techniques have shown potential in dynamic spectrum management to improve the utilization of the increasingly congested spectrum.

Machine learning (ML), the core technique of AI, has developed greatly in both theory and applications [64]. Generally, ML techniques fall into three branches.


The application of AI techniques, especially the above ML techniques, to next-generation communication networks has attracted significant attention from telecommunication regulators. In the U.S., the FCC hosted a forum on AI and machine learning for 5G on Nov 30, 2018. In this forum, the panelists concluded that AI techniques could improve network operations and would become a critical component of next-generation wireless networks [65]. A. Pai, the chairman of the FCC, stated that AI has the potential to build smarter communication networks and to improve the efficiency of spectrum utilization [66]. In the UK, Ofcom also recognized AI and machine learning as powerful technologies to support 5G application scenarios such as ultra-reliable low-latency communications (URLLC) [67].

Motivated by the superiority of AI, many research organizations have been investigating the application of AI techniques to dynamic spectrum management [68]. The Defense Advanced Research Projects Agency (DARPA) in the U.S. has held a 3-year grand competition called the "Spectrum Collaboration Challenge" (SC2) since 2017 [69]. The main objective of the SC2 is to imbue wireless communications with AI and machine learning so that intelligent strategies can be developed to optimize the usage of wireless spectrum resources in real time. Recent work from the National Institute of Standards and Technology (NIST) in the U.S. shows that an AI-based method greatly outperforms traditional methods in spectrum sensing [70].

Using machine learning and AI techniques, the model-based schemes of traditional DSM can be transformed into data-driven ones, making DSM more flexible and efficient. Specifically, we summarize the benefits of applying AI techniques to DSM as follows.


However, several challenges remain in realizing AI-based DSM schemes. For example, an AI-based scheme needs to differentiate the importance of different types of data in the radio environment; on that basis, it is more likely to extract features useful for its objective. Moreover, since training existing AI techniques incurs a huge computational overhead, how to accelerate computation to reduce latency and expense is also a matter of concern.

In summary, it is believed that DSM can be achieved in a more efficient, robust and flexible way by applying AI and machine learning techniques. However, the challenges in implementing AI-based DSM schemes should also be addressed. A detailed and systematic investigation of machine learning technologies and their applications to DSM will be presented in Chap. 6.

#### **1.6 Outline of the Book**

DSM has been recognized as an effective way to improve the efficiency of spectrum utilization. In this book, three enabling techniques for DSM are introduced: CR, blockchain and AI.

Chapters 2–4 discuss the CR techniques. Specifically, Chap. 2 focuses on CR for OSA. We start with a brief introduction to the OSA model and the functionality of sensing-access design at the PHY and MAC layers. Then three classic sensing-access design problems are introduced, namely, sensing-throughput tradeoff, spectrum sensing scheduling, and sequential spectrum sensing. Furthermore, existing works on sensing-access design are reviewed. Finally, the application of OSA to operating LTE in unlicensed bands (LTE-U) is discussed.

Chapter 3 is devoted to spectrum sensing, the critical technique for OSA. We first provide the fundamental theory of spectrum sensing from the optimal likelihood ratio test perspective. Then, we review the classical spectrum sensing methods, including the Bayesian method, robust hypothesis testing, energy detection, matched filtering detection, and cyclostationary detection. After that, we discuss the robustness of the classical methods and review techniques that can enhance sensing reliability in hostile environments. Finally, we discuss cooperative sensing, which uses data fusion or decision fusion from multiple sensors to enhance the sensing performance.

Chapter 4 focuses on CR for CSA. We start with an introduction to the challenges in the CSA model. Then, the basic single-antenna CSA is presented and the optimal transmit power design under different types of power constraints is discussed. Furthermore, the multi-antenna CSA is presented, and channel information acquisition and transceiver beamforming are discussed. After that, the transmit and receive designs for the CR multiple access channel and broadcast channel are presented, followed by a discussion of robust design for the multi-antenna CSA. Finally, the application of CSA to operating LTE in legacy licensed bands, also known as spectrum refarming, is provided.

Chapter 5 presents the applications of blockchain techniques to support DSM. Generally, the use of blockchain technologies can improve decentralization and security, as well as reduce the administration cost. We investigate the basic principles of applying blockchain technologies to spectrum management and the practical implementation of blockchain over the CR network. Moreover, the recent literature is reviewed, and the challenges and future directions are also discussed.

Chapter 6 presents the applications of AI techniques to support DSM. With the help of AI techniques, spectrum management can be achieved in a more flexible and efficient way, while also improving performance. This chapter starts with the basic principles of AI. Then, a review of ML techniques, including statistical ML, deep learning and reinforcement learning, is presented. After that, recent applications of AI techniques in spectrum sensing, signal classification and dynamic spectrum access are discussed.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 2 Opportunistic Spectrum Access**

**Abstract** The opportunistic spectrum access (OSA) model is one of the most widely used models for dynamic spectrum access. Spectrum sensing is the enabling function for OSA. Since a secondary user (SU) cannot perform spectrum sensing and spectrum access at the same time, sensing and access strategies must be jointly designed to maximize the SU's own transmission opportunities while ensuring sufficient protection of the primary users (PUs). This chapter starts with a brief introduction to the opportunistic spectrum access model and the functionality of sensing-access design at the PHY and MAC layers. Then three classic sensing-access design problems are introduced, namely, sensing-throughput tradeoff, spectrum sensing scheduling, and sequential spectrum sensing. Finally, the application of opportunistic spectrum access to operating LTE in unlicensed bands (LTE-U) is discussed.

#### **2.1 Introduction**

The opportunistic spectrum access (OSA) model, also referred to as the interweave paradigm in [1] or spectrum overlay in [2], is probably the most appealing model for unlicensed/secondary users to access the radio spectrum. In this model, the secondary users (SUs) opportunistically access spectrum bands of the primary users (PUs) that are temporarily unused. By enabling unlicensed use of the spectrum while guaranteeing the priority of licensed users, the OSA model has received great attention from both research and regulatory organizations.

By definition, before transmission, the SUs in the OSA model need to know the busy/idle status of the spectrum bands in which they are interested. With such knowledge, the SUs can access the unused spectrum bands of the PUs, i.e., the spectrum holes or spectrum white space, so that the PUs' QoS will not be degraded. As introduced in Chap. 1, such knowledge can be acquired using two approaches: a geolocation database and spectrum sensing. The former approach can be applied when the PUs' spectrum usage is highly predictable [3, 4] and the PUs are willing to publicize their spectrum usage, possibly to improve spectrum utilization [5]. However, when the spectrum usage of the PUs might not

**Fig. 2.1** Key functions of the PHY and MAC layer in the OSA model

be predictable or the PUs are unwilling to share such information, spectrum sensing becomes a critical way to detect the available spectrum which enables the operation of the OSA model.

Spectrum sensing based OSA design has gone through a thriving development in academia. One line of work has focused on improving the accuracy of spectrum sensing, while another has focused on the coordination of spectrum sensing and access, i.e., the sensing-access design. Being essentially a signal detection technique, spectrum sensing may produce incorrect results due to noise uncertainty and channel effects such as multipath fading and shadowing. The accuracy of spectrum sensing is, however, crucial for the detection of spectrum holes and the protection of PUs. Thus, many works have focused on the design of efficient detection algorithms or the collaboration of SUs for diversity gain [6–11]. A detailed introduction to spectrum sensing techniques will be given in Chap. 3, while in this chapter, we discuss the other important aspect of OSA design, the sensing-access design. The sensing-access structure of OSA reveals that spectrum access depends largely on the results of spectrum sensing. Moreover, optimizing the performance of spectrum sensing and access may conflict with practical concerns such as the limited computational capability of the SUs, which gives rise to the tradeoff design between spectrum sensing and access.

In Fig. 2.1, we illustrate the key functions of the physical (PHY) and medium access control (MAC) layers of CR networks (CRNs). In the PHY layer, spectrum sensing enables the SUs to detect the spectrum holes, while access control optimizes the transceiver design with respect to the carrier frequency, modulation and coding scheme, etc. In the MAC layer, there are mainly two functions: sensing scheduling and access scheduling. The former determines when, on which channel, for how long and how frequently spectrum sensing should be performed, while the latter governs the access of multiple users to the detected spectrum holes. A coordinator of the two functions, called the sensing-access coordinator, is established. In the following sections, we investigate three classic problems in sensing-access design by first presenting their basic ideas and concerns, and then reviewing the existing literature on solving them.

#### **2.2 Sensing-Throughput Tradeoff**

Due to the half-duplex operation of a transceiver, an SU cannot perform spectrum sensing and access at the same time. As a result, it has to alternate between sensing and access within a data frame. Assuming that spectrum sensing is performed periodically in each frame, the frame structure of the SU is illustrated in Fig. 2.2. Denote by $\tau$ the spectrum sensing time and by $T$ the frame length. The time left for potential spectrum access is thus $T - \tau$. Intuitively, with a longer sensing time, the accuracy of spectrum sensing improves and there is a higher chance that the status of the spectrum is correctly detected. However, this reduces the time left for spectrum access and thus affects the throughput of the SU. Therefore, there is a tradeoff between spectrum sensing and throughput. This sensing-throughput tradeoff problem is investigated in [12]. In the following, the basic formulation of the problem is first presented, followed by its extension to the case when cooperative spectrum sensing is employed.

#### *2.2.1 Basic Formulation*

**Fig. 2.2** Frame structure for periodic spectrum sensing

The performance of spectrum sensing is characterized by two metrics, namely, the *probability of false alarm* $P_f$ (the probability of declaring the PU present when it is actually absent) and the *probability of detection* $P_d$ (the probability of declaring the PU present when it is indeed present). The decision whether to access the spectrum depends on the result of spectrum sensing. There are two scenarios in which the SU may access the spectrum.


The average throughput of the secondary network can be calculated by taking into consideration the achievable throughput for both scenarios

$$R = R\_0 + R\_1 \tag{2.1}$$

where $R_0$ is the throughput contributed by the first scenario and $R_1$ that contributed by the second. Denote by $P(\mathcal{H}_0)$ the probability that the PU is absent, and by $C_0$ and $C_1$ the throughput of the SU when it transmits continuously in the first and second scenarios, respectively. Then $R_0$ and $R_1$ can be expressed as follows

$$R\_0(\epsilon, \tau) = P(\mathcal{H}\_0) \frac{T - \tau}{T} C\_0(1 - P\_f(\epsilon, \tau)) \tag{2.2}$$

$$R\_1(\epsilon, \tau) = (1 - P(\mathcal{H}\_0)) \frac{T - \tau}{T} C\_1 (1 - P\_d(\epsilon, \tau)) \tag{2.3}$$

where $\epsilon$ is the threshold of energy detection for spectrum sensing. Since both the threshold $\epsilon$ and the sensing time $\tau$ affect the accuracy of spectrum sensing, $P_f$ and $P_d$ are functions of $(\epsilon, \tau)$, and so are $R_0$ and $R_1$.

Note that, different from the first scenario, in the second scenario the SU transmits in the presence of the PU. Hence, in general, $C_0 > C_1$. Furthermore, it is typically more beneficial to explore spectrum that is underutilized, for example, when $P(\mathcal{H}_0) \ge 0.5$. Therefore, we can safely assume that $R_0$ dominates the overall throughput $R$, i.e., $R(\epsilon, \tau) \approx R_0(\epsilon, \tau)$.

The problem of sensing-throughput tradeoff is to optimize the spectrum sensing parameters to maximize the achievable throughput of the SU subject to the constraint that the PU is sufficiently protected. Mathematically, the problem can be expressed as

$$\max\_{\epsilon,\tau} \quad \mathcal{R}(\epsilon,\tau) \approx P(\mathcal{H}\_0) \frac{T-\tau}{T} C\_0 (1 - P\_f(\epsilon,\tau)) \tag{2.4a}$$

$$\text{s.t.}\ \ P\_d(\epsilon, \tau) \ge \bar{P}\_d \tag{2.4b}$$

$$0 < \tau < T, \tag{2.4c}$$

where $\bar{P}_d$ is the target probability of detection. It has been proved in [12] that the above optimization achieves its optimum when the constraint (2.4b) is satisfied with equality.

Note that the above formulation depends heavily on the two performance metrics of spectrum sensing, $P_d$ and $P_f$. The former can be regarded as an indication of the level of protection of the PU, since a higher probability of detection reduces the chance that the SU accesses the spectrum over which the PU is operating; the latter is related to the amount of transmission opportunities for the SU, since the lower the false alarm probability, the better the SU can reuse the spectrum. These two metrics, in the form of $P_d$ and $1 - P_f$, conflict with each other. For example, for a given detection scheme, an increase in the probability of detection improves the protection of the PU; however, this is achieved at the expense of an increased probability of false alarm, which reduces the spectrum access opportunities of the SU.


By using energy detection and setting $P_d = \bar{P}_d$, the probability of false alarm can be expressed as [12]

$$P\_f(\tau) = \mathcal{Q}\left(\sqrt{2\gamma + 1}\,\mathcal{Q}^{-1}(\bar{P}\_d) + \sqrt{\tau f\_s}\,\gamma\right) \tag{2.5}$$

where $\gamma$ is the received signal-to-noise ratio (SNR) of the primary signal, $\tau$ is the sensing time and $f_s$ is the sampling rate.
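As a quick numerical illustration of (2.5), the sketch below evaluates $P_f(\tau)$ using only the Python standard library: the Q-function is computed from `math.erfc`, its inverse by bisection, and the parameter values (SNR, sampling rate, target $P_d$) are illustrative choices rather than values from [12].

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def Qinv(p):
    """Inverse Q-function via bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid       # true inverse lies to the right of mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def false_alarm_prob(tau, gamma, fs, Pd_target):
    """P_f(tau) from Eq. (2.5) for energy detection with P_d fixed at Pd_target."""
    return Q(math.sqrt(2 * gamma + 1) * Qinv(Pd_target)
             + math.sqrt(tau * fs) * gamma)

gamma = 10 ** (-15 / 10)                 # assumed received SNR of -15 dB
pf = false_alarm_prob(tau=1e-3, gamma=gamma, fs=6e6, Pd_target=0.9)
```

Increasing `tau` drives the argument of the Q-function up and hence $P_f$ down, which is exactly the behavior exploited in the tradeoff below.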

Then the above optimization problem reduces to one with a single variable $\tau$, with the objective function given as follows

$$R(\tau) \approx C\_0 P(\mathcal{H}\_0) \left( 1 - \frac{\tau}{T} \right) \left( 1 - \mathcal{Q} \left( \sqrt{2\gamma + 1} \mathcal{Q}^{-1} \left( \bar{P}\_d \right) + \sqrt{\tau f\_s} \gamma \right) \right) \tag{2.6}$$

It has been proved in [12] that, under certain regularity assumptions on the form and distribution of the noise and the primary and secondary signals, there exists an optimal sensing time that maximizes the achievable throughput of the secondary network.

Consider the scenario where $P(\mathcal{H}_0) = 0.8$, $T = 100$ ms and $f_s = 6$ MHz. The probability of false alarm $P_f$ and the normalized achievable throughput $R/(C_0 P(\mathcal{H}_0))$ of the secondary network are plotted against the spectrum sensing time $\tau$ in Figs. 2.3 and 2.4, respectively, for different received SNRs $\gamma$ of the primary signal. As expected, Fig. 2.3 shows that with a longer sensing time, the quality of spectrum sensing improves and thus the probability of false alarm decreases. However, this reduces the available spectrum access time. Overall, Fig. 2.4 shows that there is an optimal sensing time that maximizes the achievable throughput. Furthermore, it can be observed that when $\gamma$ decreases,

**Fig. 2.3** Probability of false alarm *Pf* versus sensing time τ under different received SNRs γ of the primary signal

**Fig. 2.4** Normalized achievable throughput *R*/(*C*0*P*(H0)) versus sensing time τ under different received SNRs γ of the primary signal

which indicates a more stringent spectrum sensing requirement, the SU has to devote more time to spectrum sensing in order to protect the PU, which leads to an increased optimal sensing time and a reduced maximum achievable throughput.
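The existence of an interior optimum can be reproduced numerically. The sketch below grid-searches the sensing time that maximizes the normalized throughput in (2.6), using the scenario parameters above; the assumed SNR of -20 dB and the grid resolution are illustrative choices, and the numeric constant for $\mathcal{Q}^{-1}(0.9)$ is a standard tabulated value.

```python
import math

def Q(x):
    """Gaussian tail probability."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def norm_throughput(tau, T, gamma, fs, Qinv_Pd):
    """Normalized throughput R/(C0*P(H0)) from Eq. (2.6)."""
    pf = Q(math.sqrt(2 * gamma + 1) * Qinv_Pd + math.sqrt(tau * fs) * gamma)
    return (1 - tau / T) * (1 - pf)

T, fs = 100e-3, 6e6          # frame length 100 ms, sampling rate 6 MHz
Qinv_Pd = -1.2816            # Q^{-1}(0.9) for a target P_d of 0.9
gamma = 10 ** (-20 / 10)     # assumed received SNR of -20 dB

# Grid search over sensing times from 0.1 ms to 99.9 ms
taus = [i * 1e-4 for i in range(1, 1000)]
best_tau = max(taus, key=lambda t: norm_throughput(t, T, gamma, fs, Qinv_Pd))
```

For very small `tau` the false alarm term dominates, for `tau` near `T` the access-time factor `1 - tau/T` dominates, so the maximizer lies strictly inside the interval, mirroring Fig. 2.4.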

#### *2.2.2 Cooperative Spectrum Sensing*

The above formulation considers the case where the result of spectrum sensing is determined by a single SU. When there are multiple nearby SUs, spectrum sensing can be improved by combining the sensing results of these users. Thus, the quality of spectrum sensing depends not only on the detection threshold $\epsilon$ and the sensing time $\tau$ but also on the way the individual sensing results are combined, i.e., the fusion rule. In [13], the basic formulation of the sensing-throughput problem is extended to the case where cooperative spectrum sensing is used.

Assume that there are $N$ SUs participating in cooperative spectrum sensing, each reporting its individual sensing result to the fusion center. Consider the $k$-out-of-$N$ fusion rule [11], by which the channel is declared busy if at least $k$ out of $N$ users detect it to be so. The quality of spectrum sensing thus also depends on the parameter $k$ of the fusion rule. The overall probability of false alarm and probability of detection are given by

$$\mathbf{P}\_f(\epsilon, \tau, k) = \sum\_{i=k}^{N} \binom{N}{i} P\_f(\epsilon, \tau)^i \left(1 - P\_f(\epsilon, \tau)\right)^{N-i} \tag{2.7}$$


and

$$\mathbf{P}\_d(\epsilon, \tau, k) = \sum\_{i=k}^{N} \binom{N}{i} P\_d(\epsilon, \tau)^i \left(1 - P\_d(\epsilon, \tau)\right)^{N-i},\tag{2.8}$$

respectively.
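Equations (2.7) and (2.8) share the same form: the upper tail of a binomial distribution. A small sketch with illustrative parameter values:

```python
from math import comb

def k_out_of_n(p, N, k):
    """Probability that at least k of N independent detectors fire,
    each with per-user probability p (the form of Eqs. (2.7)-(2.8))."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(k, N + 1))

# Per-user P_d = 0.9 and P_f = 0.1, N = 10 cooperating SUs, majority rule k = 5
Pd_coop = k_out_of_n(0.9, 10, 5)   # fused detection probability (near 1)
Pf_coop = k_out_of_n(0.1, 10, 5)   # fused false alarm probability (near 0)
```

With a majority rule, the fused detection probability improves while the fused false alarm probability drops relative to a single user, which is the diversity gain that motivates cooperative sensing.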

Then the basic formulation in (2.4) can be revised to the following problem

$$\max\_{\epsilon,\tau,k} \ R(\epsilon,\tau,k) \approx P(\mathcal{H}\_0) \frac{T-\tau}{T} C\_0 \left(1 - \mathbf{P}\_f(\epsilon,\tau,k)\right) \tag{2.9a}$$

$$\text{s.t. } \mathbf{P}\_d(\epsilon, \tau, k) \ge \bar{P}\_d \tag{2.9b}$$

$$0 < \tau < T \tag{2.9c}$$

$$0 \le k \le N. \tag{2.9d}$$

Similar to the basic formulation, it is proved in [14] that optimality is achieved with (2.9b) satisfied with equality. Then, for any fixed $k$, the value of $P_d(\epsilon, \tau)$ for the individual SU, denoted by $\bar{P}_d$, that satisfies (2.9b) with equality can be found. For any given value of $\bar{P}_d$, $P_f$ is related to $\bar{P}_d$ by (2.5). The above optimization problem can then be reduced to one of only two variables $(\tau, k)$. In [13], an iterative algorithm is proposed to compute the optimal value of $(\tau, k)$.
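The first step of this reduction, fixing $k$ and finding the per-user detection target that satisfies (2.9b) with equality, can be sketched as follows. Since the fused probability (2.8) is increasing in the per-user probability, simple bisection suffices; the parameter values are illustrative, and this is a sketch of the reduction, not the iterative algorithm of [13] itself.

```python
from math import comb

def coop_prob(p, N, k):
    """k-out-of-N fused probability, the form of Eq. (2.8)."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) for i in range(k, N + 1))

def per_user_pd(Pd_target, N, k, iters=60):
    """Bisection for the per-user detection probability p such that the
    fused probability equals Pd_target (Eq. (2.9b) with equality).
    coop_prob is increasing in p, so plain bisection applies."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if coop_prob(mid, N, k) < Pd_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Required per-user target for each fusion parameter k
N, Pd_target = 10, 0.99
targets = {k: per_user_pd(Pd_target, N, k) for k in range(1, N + 1)}
# The OR rule (k=1) imposes the loosest per-user target,
# the AND rule (k=N) the most stringent one.
```

With each `targets[k]` in hand, the remaining optimization over $\tau$ (via (2.5)) and $k$ is a two-variable search, as described above.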

#### **2.3 Spectrum Sensing Scheduling**

In the sensing-throughput tradeoff problem, the spectrum sensing time is optimized for a fixed frame duration. The formulation implicitly assumes that the status of the PU remains unchanged throughout the entire frame. In other words, it implies that the SU has to be synchronized with the PU's frame. This may not be easy to achieve if the PU refuses to cooperate by providing such synchronization information. In this case, for a periodic spectrum sensing scheme, the frame duration, which determines how frequently spectrum sensing is scheduled, also affects the achievable throughput of the secondary network.

Intuitively, for a fixed sensing time, the longer the frame duration, the more effective transmission time, which potentially leads to a higher throughput. However, when the frame duration is long, there is a higher chance that the PU's status will change during an SU's transmission. This may result in a collision in the middle of the secondary transmission if the primary user becomes active, and the throughput of the SU will suffer. Therefore, the frame duration needs to be optimized to balance the tradeoff between PU protection and SU performance. This problem is investigated in [15] and presented in the following.

As mentioned in Sect. 2.2.1, there are two scenarios when the SU accesses the spectrum. Similar to the treatment in the sensing-throughput tradeoff problem, only the achievable throughput of the first scenario is considered since it is the dominating factor.

**Fig. 2.5** The considered scenario

Consider the first scenario, in which the primary user is not active during spectrum sensing, as shown in Fig. 2.5. Assume that the primary user has an exponential on-off traffic model in which the durations of the active and inactive periods are exponentially distributed with means $\mu_1$ and $\mu_0$, respectively. Due to the memoryless property of the exponential distribution, without loss of generality, the end of the sensing slot can be taken as the starting time $t = 0$. Denote by $t$ the instant at which the primary user becomes active. Then the duration of time during which collision occurs is a random variable, which can be expressed as

$$\mathbf{x}(t) = \begin{cases} T - \tau - t, & 0 \le t \le T - \tau \\ 0, & t > T - \tau \end{cases} \tag{2.10}$$

Based on this, the average time that collision occurs in the frame can be calculated as follows

$$\bar{X} = \mathbb{E}\{\mathbf{x}(t)\} = \int\_0^{T-\tau} (T - \tau - t) \frac{1}{\mu\_0} \exp\left(-\frac{t}{\mu\_0}\right) dt \tag{2.11}$$

$$=T-\tau-\mu\_0\left(1-\exp\left(-\frac{T-\tau}{\mu\_0}\right)\right)\tag{2.12}$$
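As a numerical cross-check (not part of [15]), the closed form (2.12) can be verified against direct midpoint-rule integration of (2.11); the parameter values below are illustrative.

```python
import math

def avg_collision_closed(T, tau, mu0):
    """Average collision time per frame, closed form of Eq. (2.12)."""
    return T - tau - mu0 * (1 - math.exp(-(T - tau) / mu0))

def avg_collision_numeric(T, tau, mu0, steps=100000):
    """Midpoint-rule integration of Eq. (2.11):
    integral of (T - tau - t) * (1/mu0) * exp(-t/mu0) over [0, T - tau]."""
    dt = (T - tau) / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        total += (T - tau - t) * math.exp(-t / mu0) / mu0 * dt
    return total

T, tau, mu0 = 100e-3, 1e-3, 650e-3   # illustrative values (seconds)
x_closed = avg_collision_closed(T, tau, mu0)
x_numeric = avg_collision_numeric(T, tau, mu0)
# the two agree to well within numerical integration error
```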

Then the normalized achievable throughput (normalized by $P(\mathcal{H}_0)(1 - P_f)C_0$) of the SU in this scenario is

$$\tilde{R}(T) = \frac{T - \tau - \bar{X}}{T} = \frac{\mu\_0}{T} \left( 1 - \exp\left( -\frac{T - \tau}{\mu\_0} \right) \right) \tag{2.13}$$

Next, the collision probability for the PU will be derived. The average collision time within each active period of the primary user can be calculated as

$$\bar{Y} = \frac{\bar{X}}{\Pr\{0 \le t \le T - \tau\}} = \frac{\bar{X}}{1 - \exp\left(-\frac{T - \tau}{\mu\_0}\right)} \tag{2.14}$$

Then the collision probability can be expressed as

$$P\_c^p(T) = \frac{\bar{Y}}{\mu\_1} = \frac{1}{\mu\_1} \left( \frac{T - \tau}{1 - \exp\left(-\frac{T - \tau}{\mu\_0}\right)} - \mu\_0 \right) \tag{2.15}$$

The objective is to find the optimal frame duration that maximizes the normalized achievable throughput subject to the constraint that the collision probability of the primary user is kept below a limit. Mathematically, it is expressed as

$$\max\_{T} \quad \tilde{R}(T) = \frac{\mu\_0}{T} \left( 1 - \exp\left( -\frac{T-\tau}{\mu\_0} \right) \right) \tag{2.16}$$

$$\text{s.t.}\ \ P\_c^p(T) \le \bar{P}\_c^p\tag{2.17}$$

$$T > \tau \tag{2.18}$$

Setting the derivative of $\tilde{R}(T)$ to zero, the stationary point of the objective function can be found as

$$T\_o = -\mu\_0 \left( 1 + \mathcal{W}\_{-1} \left( -\exp\left( -\frac{\mu\_0 + \tau}{\mu\_0} \right) \right) \right) \tag{2.19}$$

**Fig. 2.6** The normalized achievable throughput and the PU's collision probability at different frame durations *T*

where $\mathcal{W}_{-1}(x)$ denotes the lower branch of the Lambert W function, which solves the equation $w \exp(w) = x$ for $w < -1$. If $P_c^p(T_o) \le \bar{P}_c^p$ and $T_o > \tau$, then $T_o$ is the optimal frame duration.

Consider the scenario with an average inactive duration of $\mu_0 = 650$ ms, an average active duration of $\mu_1 = 352$ ms, a spectrum sensing time of $\tau = 1$ ms and a target collision probability of $\bar{P}_c^p = 0.1$. Figure 2.6 shows the normalized achievable throughput and the PU's collision probability, respectively, as functions of the frame duration. It can be seen that in this scenario there is a unique frame duration that maximizes the normalized achievable throughput while satisfying the PU's collision probability constraint.
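The stationary point (2.19) for this scenario can be evaluated without any external libraries by computing the lower Lambert W branch via bisection ($w e^w$ is monotone for $w \le -1$), and cross-checked against a grid search over (2.16); the grid resolution is an arbitrary choice.

```python
import math

def norm_throughput(T, tau, mu0):
    """Normalized throughput from Eq. (2.16)."""
    return (mu0 / T) * (1 - math.exp(-(T - tau) / mu0))

def collision_prob(T, tau, mu0, mu1):
    """PU collision probability from Eq. (2.15)."""
    return ((T - tau) / (1 - math.exp(-(T - tau) / mu0)) - mu0) / mu1

def lambert_w_neg1(x, lo=-50.0, hi=-1.0, iters=200):
    """Lower branch W_{-1}: solve w*exp(w) = x for w <= -1 by bisection
    (w*exp(w) is decreasing there); valid for -1/e <= x < 0."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) - x > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Parameters of the Fig. 2.6 scenario (seconds)
mu0, mu1, tau = 650e-3, 352e-3, 1e-3
T_o = -mu0 * (1 + lambert_w_neg1(-math.exp(-(mu0 + tau) / mu0)))  # Eq. (2.19)

# Cross-check: grid search of the throughput over the frame duration
grid = [tau + i * 1e-4 for i in range(1, 3000)]
T_grid = max(grid, key=lambda T: norm_throughput(T, tau, mu0))
```

The grid-search maximizer agrees with the closed-form stationary point to grid resolution, and the resulting $T_o$ also satisfies the collision constraint here, so it is the optimal frame duration for this scenario.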

#### **2.4 Sequential Spectrum Sensing**

Periodic spectrum sensing has been considered in the preceding two sections. In such a framework, if a channel is sensed busy, the SU has to wait until the next frame to sense the same channel or another channel to identify a spectrum opportunity. This can delay spectrum access. A fundamentally different approach is sequential spectrum sensing, in which the SU sequentially senses a number of channels, without any additional waiting period in between, before deciding which channel to transmit over. In this case, the SU can dynamically determine how many channels should be sensed before a transmission. This approach allows the SU to exploit the diversity in occupancy among different licensed channels: if one channel is sensed busy, the SU can quickly identify a spectrum opportunity by continuing to sense other channels. Furthermore, it allows the SU to exploit the diversity in the secondary channels' fading statistics, so that the SU can take advantage of a better channel to maximize its own benefit.

Clearly, when more channels are sensed, there is a higher chance of identifying a channel with higher throughput. However, this may waste more energy and time on spectrum sensing, so a tradeoff has to be struck between throughput and energy consumption. In [16], the sensing-access design for sequential spectrum sensing is investigated from an energy-efficiency perspective. In particular, the paper designs the sensing policy, which determines when to stop sensing and start transmission; the access policy, which determines how much power to use upon transmission; and the sensing order, which determines which channel to sense next if the current channel is given up, so as to maximize the overall energy efficiency of the entire sequential spectrum sensing process. In the following, the energy-efficient sensing-access design with a fixed sensing order is first described and then extended to the case when the sensing order is optimized.

#### *2.4.1 Given Sensing Order*

In [16], the authors consider a sequential spectrum sensing framework under which the SU can sequentially sense a maximum of *K* channels of bandwidth *B* each. An example of the sequential channel sensing process, with the sensing order given by the logical indices of the channels, is illustrated in Fig. 2.7. For each channel, e.g., channel *k*, the SU performs spectrum sensing to find whether channel *k* is busy or idle, i.e., the status δ_k of channel *k*, with δ_k = 1 and δ_k = 0 representing that channel *k* is busy and idle, respectively. If channel *k* is sensed to be busy, the SU continues to sense the next channel. If channel *k* is sensed to be idle, the SU performs channel estimation to determine the channel gain h_k and then decides whether to select a power level and transmit over this channel for a period of T_k, or to continue to sense the next channel.

Under such a sequential spectrum sensing framework, a decision has to be made after sensing each channel, before the SU decides to access a channel. At most, *K* decisions have to be made. Such a process can be modeled as a *K*-stage stochastic sequential decision-making problem, which consists of the following basic components:


**Fig. 2.7** An illustration of the sequential spectrum sensing process in [16]

At each stage, two cost functions are defined: one for the throughput and the other for the energy consumption.

First, some notation is introduced. Denote r_k^0 and r_k^1 as the numbers of bits that can be transmitted over channel *k* when it is truly idle and truly busy, respectively. Furthermore,

$$r\_k^0 = T\_k B \log\_2 \left( 1 + SNR\_k / \Gamma \right) \tag{2.20}$$

where SNR_k represents the SNR received at the SU-Rx, and Γ is the SNR gap to the channel capacity. The received SNR can be further defined as

$$SNR\_k(h\_k, p\_t) = \frac{\rho\_k h\_k p\_t}{\iota \sigma^2} \tag{2.21}$$

where ρ_k captures the propagation loss, ι is the link margin compensating for hardware process variation and imperfection, and σ² is the noise power at the receiver front end. Moreover, denote θ_k as the probability that channel *k* is idle, i.e., θ_k = Pr(δ_k = 0), P_{f,k} as the probability of false alarm for channel *k*, and P̄_{d,k} as the target probability of detection for channel *k*.

When channel *k* is sensed to be busy, the throughput is g_k^R(s_k, u_k) = 0. Otherwise, it is defined as

$$g_k^R(s_k, u_k) = \mathbb{E}_{\delta_k | \hat{\delta}_k}[r_k] = \omega_k r_k^0 + (1 - \omega_k) r_k^1 \approx \omega_k r_k^0 \tag{2.22}$$

where

$$\omega_k = \frac{\theta_k (1 - P_{f,k})}{\theta_k (1 - P_{f,k}) + (1 - \theta_k)(1 - \bar{P}_{d,k})}$$

is the *posterior* probability of δ_k = 0 given δ̂_k = 0. The approximation is due to the fact that P̄_{d,k} ≈ 1 and r_k^0 ≥ r_k^1 [12]. The energy consumption at stage *k* is defined as

$$g_k^E(s_k, u_k) = \begin{cases} 0, & \text{if } s_k = \mathcal{T} \\ \tau p_s + T_k(\alpha p_t + p_c), & \text{if } s_k \neq \mathcal{T} \text{ and } u_k = p_t \\ \tau p_s, & \text{otherwise} \end{cases} \tag{2.23}$$

where τ is the sensing time, p_s is the sensing power, p_c is the power consumed in the various transceiver electronic circuits excluding the power amplifier (PA), α = ξ/ζ, ξ is the peak-to-average ratio of the PA, and ζ is the drain efficiency of the PA.
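The two per-stage quantities above, the posterior ω_k and the energy cost of (2.23), can be sketched directly. This is a minimal illustration: the function names are ours, and the default ξ and ζ values are taken from the numerical example later in this section.

```python
def posterior_idle_prob(theta, p_f, p_d_bar):
    """Posterior probability that a channel is idle, given it was sensed idle.

    theta:   prior idle probability, Pr(delta_k = 0)
    p_f:     probability of false alarm P_{f,k}
    p_d_bar: target probability of detection (P-bar_{d,k})
    """
    idle_and_sensed_idle = theta * (1.0 - p_f)
    busy_and_sensed_idle = (1.0 - theta) * (1.0 - p_d_bar)
    return idle_and_sensed_idle / (idle_and_sensed_idle + busy_and_sensed_idle)


def stage_energy(terminal, transmit, tau, p_s, T_k, p_t, p_c,
                 xi_db=6.0, zeta=0.35):
    """Per-stage energy cost of eq. (2.23), a sketch.

    terminal: True if sensing has already stopped (state T), cost 0
    transmit: True if the control at this stage is a transmit power p_t
    """
    if terminal:
        return 0.0
    alpha = (10.0 ** (xi_db / 10.0)) / zeta   # alpha = xi / zeta, xi in linear scale
    energy = tau * p_s                        # sensing energy, spent at every stage
    if transmit:                              # plus PA and circuit energy
        energy += T_k * (alpha * p_t + p_c)
    return energy


# With theta = 0.8, P_f = 0.1, P-bar_d = 0.9 (values used in the numerical
# example below), a sensed-idle outcome strongly indicates a truly idle channel.
omega = posterior_idle_prob(0.8, 0.1, 0.9)
```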

The objective of the energy-efficient sequential spectrum sensing is to find a sequence of functions φ = {μ_1(s_1), ..., μ_K(s_K)}, mapping each state s_k into a control u_k = μ_k(s_k), such that the energy efficiency of the entire sequential spectrum sensing process is maximized. Mathematically, this can be expressed as

$$\max\_{\phi} \; \eta\_{\phi} = \frac{\mathbb{E}\left\{\sum\_{k=1}^{K} \mathbf{g}\_{k}^{\mathcal{R}} \left(\mathbf{s}\_{k}, \mu\_{k}(\mathbf{s}\_{k})\right)\right\}}{\mathbb{E}\left\{\sum\_{k=1}^{K} \mathbf{g}\_{k}^{\mathcal{E}} \left(\mathbf{s}\_{k}, \mu\_{k}(\mathbf{s}\_{k})\right)\right\}}\tag{2.24}$$

where the expectation is taken over s_k, k = 1, ..., K.


The above problem is very difficult to solve in its current form, as its objective is a ratio of two additive cost functions. To tackle it, a new sequential decision-making problem is formed, with the states and decisions defined the same as above, but with the cost function at stage *k*, *k* = 1, ..., *K*, which is more precisely a reward function, defined as follows:

$$\begin{aligned} G_k(s_k, u_k; \lambda) &= g_k^R(s_k, u_k) - \lambda g_k^E(s_k, u_k) \\ &= \begin{cases} 0, & \text{if } s_k = \mathcal{T} \\ \mathcal{F}_k(h_k, p_t, \lambda) - \lambda \tau p_s, & \text{if } s_k \neq \mathcal{T} \text{ and } u_k = p_t \\ -\lambda \tau p_s, & \text{otherwise} \end{cases} \end{aligned} \tag{2.25}$$

where λ ≥ 0 is a parameter which can be considered as the price per unit of energy consumption, and

$$\mathcal{F}_k(h, p, \lambda) = \omega_k T_k B \log_2 \left( 1 + SNR_k(h, p) / \Gamma \right) - \lambda T_k(\alpha p + p_c) \tag{2.26}$$

For a given value of λ, the objective of the new sequential decision-making problem is to find a sequence of functions, φ(λ) = {μ_1(s_1; λ), ..., μ_K(s_K; λ)}, mapping each (s_k; λ) into a control u_k = μ_k(s_k; λ), to maximize the expected long-term reward. Mathematically, this can be expressed as

$$\max\_{\phi(\lambda)} J\_{\phi(\lambda)}(\lambda) = \mathbb{E}\left\{ \sum\_{k=1}^{K} G\_k(s\_k, \mu\_k(s\_k, \lambda); \lambda) \right\} \tag{2.27}$$

Equation (2.27) is parameterized by λ. Clearly, for different values of λ, we have different parametric formulations and thus different resulting optimal policies. Each of the parametric formulations in (2.27) can be considered as an energy-aware design of the sensing-access policies since the throughput can be treated as the monetary reward and both the sensing energy and the transmission energy can be treated as cost. The beauty of this is that unlike the original problem in (2.24), the parametric formulation in (2.27) has an additive and separable structure, which allows dynamic programming to be applied.

By using backward induction, the optimal policy after observing that channel *k* is idle, i.e., s_k = (1, 0), can be found as

$$\mu_k(s_k; \lambda) = \begin{cases} \left[\dfrac{\omega_k B}{\lambda \alpha \ln 2} - \dfrac{\iota \Gamma \sigma^2}{\rho_k h_k}\right]_{p_{min}}^{p_{max}}, & \text{if } \mathcal{F}_k^*(h_k, \lambda) \ge \mathbb{E}\left\{J_{k+1}(s_{k+1}; \lambda)\right\} \text{ or } k = K \\ \mathcal{C}, & \text{otherwise} \end{cases} \tag{2.28}$$

where F_k^*(h, λ) = max_{u ∈ [p_min, p_max]} F_k(h, u, λ) represents the maximum immediate net reward associated with transmitting over channel *k*, p_min is a predefined small positive power level, and p_max is the maximum transmit power allowed by hardware limitations or other regulations.
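The bracketed water-filling-style power level in (2.28) is simple to evaluate; below is a minimal sketch using the symbols defined above (the function name is illustrative).

```python
import math

def optimal_power(omega_k, B, lam, alpha, iota, Gamma, sigma2, rho_k, h_k,
                  p_min, p_max):
    """Transmit power of eq. (2.28), clipped to [p_min, p_max] (a sketch).

    lam is the energy price lambda; the other arguments follow the symbols
    defined in the text: posterior omega_k, bandwidth B, PA factor alpha,
    link margin iota, SNR gap Gamma, noise power sigma2, path loss rho_k,
    and channel gain h_k.
    """
    p = omega_k * B / (lam * alpha * math.log(2)) \
        - iota * Gamma * sigma2 / (rho_k * h_k)
    return min(max(p, p_min), p_max)
```

A higher energy price λ shrinks the water level and hence the allocated power, down to the clipping floor p_min.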

The above result has the following interpretations:


In [16], it has been shown that the original problem in (2.24) and the parametric formulation are related in such a way that η_{φ*} = λ* if and only if J_{φ*}(λ*) = 0. In other words, the energy-aware sensing-access design that achieves zero expected long-term reward is the most energy-efficient sensing-access design, and the maximum energy efficiency equals the energy price λ* of that design. Furthermore, based on the monotonicity of the function J_{φ*}(λ) in λ, an iterative algorithm has been proposed to find the optimal λ*.
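The condition "maximum efficiency equals the price at which the optimal reward is zero" is the classical Dinkelbach condition from fractional programming, so the iterative search for λ* can be sketched as follows. The `policy_eval` oracle is a placeholder for the dynamic program of (2.27), not part of [16]'s notation.

```python
def max_energy_efficiency(policy_eval, lam0=0.0, tol=1e-9, max_iter=100):
    """Dinkelbach-style iteration for the maximum energy efficiency (a sketch).

    policy_eval(lam) must return (R, E): the expected throughput and energy
    of the policy that maximizes R - lam * E, i.e., the solution of the
    parametric problem (2.27) for the given lambda.
    """
    lam = lam0
    for _ in range(max_iter):
        R, E = policy_eval(lam)
        # J(lam) = R - lam * E; the optimum is reached when J(lam) = 0
        if abs(R - lam * E) < tol:
            break
        lam = R / E  # update the price to the current achieved efficiency
    return lam


# Toy check with two candidate policies (throughput, energy): the best
# achievable efficiency is max(3/1, 5/2) = 3 (illustrative numbers).
def toy_eval(lam):
    return max([(3.0, 1.0), (5.0, 2.0)], key=lambda c: c[0] - lam * c[1])
```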

Consider the following scenario for sequential spectrum sensing: a bandwidth of B = 6 MHz, noise power spectral density of N_0/2 = −204 dBW/Hz, noise figure of N_f = 10 dB, noise power of σ² = N_0 B N_f, distance between the SU-Tx and the SU-Rx of d = 200 m, carrier frequencies of f_{c,1} = 700 MHz and f_{c,k+1} − f_{c,k} = B for k = 1, ..., K − 1, propagation loss of ρ_k = (c/(4π d f_{c,k}))², link margin of ι = 10 dB, bit error rate of BER = 10⁻⁵, SNR gap to channel capacity of Γ ≈ −ln(5·BER)/1.5, minimum transmit power of p_min = 1 mW, maximum transmit power of p_max = 166.62 mW, circuit power of p_c = 210 mW, sensing power of p_s = 110 mW, transmission (frame) time of T = 100 ms, PU idle probability of θ_k = 0.8, SU channel gains h_k ∈ {1, 2, 3, 4, 5} with probabilities {0.64, 0.23, 0.09, 0.03, 0.01}, PU worst-case received SNR of γ_k = −20 dB, target probability of detection of P̄_{d,k} = 0.9, PAR of ξ = 6 dB, and drain efficiency of ζ = 0.35.
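As a sanity check on this scenario, a few of the derived quantities can be reproduced directly. A sketch; the speed of light and the dB-to-linear conversions are the only added assumptions.

```python
import math

B = 6e6                               # bandwidth, Hz
N0 = 2 * 10 ** (-204 / 10.0)          # noise PSD in W/Hz, from N0/2 = -204 dBW/Hz
Nf = 10 ** (10 / 10.0)                # 10 dB noise figure, linear
sigma2 = N0 * B * Nf                  # noise power sigma^2 = N0 * B * Nf

BER = 1e-5
Gamma = -math.log(5 * BER) / 1.5      # SNR gap to channel capacity (approx. 6.6)

d = 200.0                             # SU-Tx to SU-Rx distance, m
c = 3e8                               # speed of light, m/s
fc1 = 700e6                           # carrier frequency of channel 1, Hz
rho1 = (c / (4 * math.pi * d * fc1)) ** 2   # free-space loss rho_1 for channel 1
```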

In Fig. 2.8, the achieved energy efficiency of the optimal sensing-access policies is compared with that of two suboptimal policies, for K = 6 channels and access time T_k = T. The first suboptimal scheme uses a sensing policy that always transmits over the first available channel and an access policy based on adaptive power allocation. The second suboptimal scheme allows exploration of the diversity of multiple available channels but always uses the maximum transmit power. It can be seen from the figure that the optimal scheme outperforms both suboptimal ones.

In Fig. 2.9, the achieved energy efficiency is plotted versus sensing time as the number of channels *K* varies. We can see that with more channels, the energy efficiency improves and the optimal sensing time shrinks, due to the increased channel diversity effect.

**Fig. 2.8** Energy efficiency versus sensing time for *K* = 6 channels (constant transmission time)

**Fig. 2.9** Energy efficiency versus sensing time (constant transmission time)

#### *2.4.2 Optimal Sensing Order*

The above formulation only considers the design of the sensing policy and the access policy (in terms of the power allocation strategy) for a given sensing order. However, it can be modified to incorporate the design of the optimal sensing order. In this case, the SU also has to decide which channel to sense next if the current channel is given up for transmission. To do so, the state and decision have to be modified as follows.


The objective is to find a sequence of functions φ = {μ_0(∅, Ω_0), μ_1(s_{i_1}, Ω_1), ..., μ_K(s_{i_K}, ∅)}, with μ_k, k = 0, 1, ..., K, mapping each state (s_{i_k}, Ω_k) into a control u_k = μ_k(s_{i_k}, Ω_k), to maximize the energy efficiency of the whole process. Mathematically, this can be expressed as

$$\max_{\phi} \eta_{\phi} = \frac{\mathbb{E}\left\{\sum_{k=0}^{K} g_k^R \left(s_{i_k}, \Omega_k, \mu_k(s_{i_k}, \Omega_k)\right)\right\}}{\mathbb{E}\left\{\sum_{k=0}^{K} g_k^E \left(s_{i_k}, \Omega_k, \mu_k(s_{i_k}, \Omega_k)\right)\right\}} \tag{2.29}$$

Similar to the case when the sensing order is fixed, the above problem can be related to a parametric formulation parameterized by λ. The optimal strategy for the parametric formulation can be found as follows

$$\mu_k(s_{i_k}, \Omega_k; \lambda) = \begin{cases} \left[\dfrac{\omega_{i_k} B}{\lambda \alpha \ln 2} - \dfrac{\iota \Gamma \sigma^2}{\rho_{i_k} h_{i_k}}\right]_{p_{min}}^{p_{max}}, & \text{if } \mathcal{F}_{i_k}^*(h_{i_k}, \lambda) \ge \max\limits_{j \in \Omega_k} \mathbb{E}\{J_{k+1}(s_j, \Omega_k - j; \lambda)\} \\ \operatorname*{argmax}\limits_{j \in \Omega_k} \mathbb{E}\{J_{k+1}(s_j, \Omega_k - j; \lambda)\}, & \text{otherwise} \end{cases} \tag{2.30}$$

Compared to the result for the case of a given sensing order, the following conclusions can be drawn. First, the optimal power allocation has the same structure as (2.28). Second, the optimal sensing strategy also has a threshold structure, due to the monotonicity of F_{i_k}^*(h_{i_k}, λ). The condition F_{i_k}^*(h_{i_k}, λ) ≥ max_{j ∈ Ω_k} E{J_{k+1}(s_j, Ω_k − j; λ)} indicates that sensing is stopped when the immediate net reward exceeds the expected future net reward of continuing to sense any of the remaining channels. Lastly, if

**Fig. 2.10** Energy efficiency versus sensing time with {θ1,...,θ*<sup>K</sup>* }={0.2, 0.4, 0.6, 0.7, 0.8} (constant transmission time)

the current channel is given up, the best channel to sense next is the one with the maximum expected future net reward.

Consider K = 5 channels with the corresponding PU idle probabilities set as {θ_1, ..., θ_K} = {0.2, 0.4, 0.6, 0.7, 0.8}, while the other settings remain the same as above. Figure 2.10 compares the energy efficiency achieved using the optimal sensing order, at different values of sensing time, with that achieved with two given sensing orders. It can be seen that optimizing the sensing order is important for improving the energy efficiency of the sequential spectrum sensing process.

#### **2.5 Applications: LTE-U**

An important application of OSA is long-term evolution in unlicensed bands (LTE-U), also known as licensed-assisted access (LAA) [17, 18]. The motivation for introducing LTE service in unlicensed bands comes from the exhaustion of licensed spectrum for LTE service and the under-utilization of unlicensed bands, such as the 5 GHz band, which contains 500 MHz of radio resources and is mainly used by WiFi. These bands can serve as excellent complementary spectrum for enhancing LTE performance. Through carrier aggregation, data can be conveyed over licensed and unlicensed spectrum simultaneously, while control signals can still be transmitted over licensed spectrum to guarantee QoS. Introducing LTE in unlicensed bands requires LTE to be a fair and friendly neighbor of the incumbent WiFi. To achieve this goal, critical problems need to be addressed, including the protection of the WiFi system, efficient coexistence between the LTE and WiFi systems, and efficient user association.

#### *2.5.1 LBT-Based Medium Access Control Protocol Design*

Since WiFi adopts contention-based medium access, LTE access will introduce collisions to WiFi transmissions. To mitigate these collisions, LTE can adopt a listen-before-talk (LBT) scheme, which enables LTE to monitor the channel status and has been shown to retain most of the advantages of LTE when coexisting with a WiFi system [19]. Moreover, when LTE transmits on the channel, the WiFi users keep silent and wait for the channel to become idle. To guarantee normal WiFi service, LTE should vacate the channel after a period of data transmission and leave it for WiFi operation. Thus, the LBT-based MAC protocol of LTE-U should contain a periodic channel sensing phase, followed by a data transmission phase and a channel vacating phase. In the channel sensing phase, LTE monitors the channel idle/busy status. If the channel is sensed idle, LTE transmits data for a period of time. After that, the LTE system vacates the channel for WiFi transmission.

It can be seen that the LBT-based MAC protocol is similar to the sensing-transmission protocol of the typical OSA system, except that the channel vacating phase is absent in the latter. This is because in the typical OSA system, the primary system has higher priority than the secondary system, so the secondary system can only passively adapt to the transmissions of the primary system. In the LTE-U system, however, although the legacy WiFi system is protected, the secondary LTE system can actively shape the WiFi transmissions by carefully designing the sensing period and the transmission time.

To protect WiFi services, the performance of the multiple WiFi users should be quantified under coexistence with LTE. There are works evaluating the performance of LTE-U via simulation [20–22]. To facilitate theoretical analysis of the LTE-U system, an LBT-based LTE-U MAC protocol is designed as shown in Fig. 2.11 [23]. In this protocol, τ_s, τ_t, and τ_v denote the spectrum sensing time, the LTE transmission time, and the LTE vacating time (WiFi transmission time), respectively. Moreover, the vacating time τ_v contains γ (γ ∈ ℤ⁺) transmission periods (TPs), each of which contains a WiFi packet transmission time and its propagation delay. Assuming that the spectrum sensing result is perfect, the LBT-based MAC protocol can be specified as follows.

• Instead of sensing the spectrum at the beginning of each frame, the LTE starts sensing at the beginning of the γth TP of a frame and keeps sensing. Once the channel is sensed to be idle and the TP is not yet complete, the LTE sends a dummy packet until the TP ends. By doing so, a WiFi packet arriving during the γth TP is deferred, and the channel can be held by the LTE for the next frame.

**Fig. 2.11** An LBT-based MAC protocol design for LTE-U


Based on this protocol design, the performance of the LTE and WiFi systems can be theoretically quantified by mapping the protocol parameters to those in [24]. With the closed-form throughput and delay performance of WiFi, the protocol parameters, including the LTE transmission time and the frame length, can be optimized to maximize the LTE throughput or the overall channel utilization.
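To get a feel for the protocol timing, the frame structure of Fig. 2.11 can be sketched numerically. The function and its parameter values below are illustrative, not from [23].

```python
def lbt_frame(tau_s, tau_t, gamma, t_pkt, t_prop):
    """Timing of one LBT frame as in Fig. 2.11 (a sketch).

    The vacating time tau_v consists of gamma transmission periods, each a
    WiFi packet transmission time t_pkt plus its propagation delay t_prop.
    Returns the frame length and the airtime shares of LTE and WiFi.
    """
    tau_v = gamma * (t_pkt + t_prop)
    frame = tau_s + tau_t + tau_v
    return {"frame": frame,
            "lte_share": tau_t / frame,
            "wifi_share": tau_v / frame}
```

For example, with 1 ms of sensing, 50 ms of LTE transmission, and γ = 5 TPs of 10 ms each, LTE and WiFi each hold roughly half of the 101 ms frame.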

#### *2.5.2 User Association: To be WiFi or LTE-U User?*

One important observation from the performance analysis of the LBT-based LTE-U MAC protocol is that when a batch of new users joins the network, it is not always advantageous for them to be LTE-U users, in terms of either the individual throughput of the new users or the overall channel utilization. The simulation results in [23] show that whether the new users should join the LTE-U system or the legacy WiFi system for better performance is largely determined by the traffic load of the existing WiFi system, including the packet arrival rate and the number of WiFi users. Therefore, the user association, which determines the service provider for the new users, should be optimized.

To maximize the normalized throughput of the unlicensed band while guaranteeing the QoS of WiFi service, a joint resource allocation and user association problem is formulated for a heterogeneous network in which LTE small cells opportunistically access the spectrum of the WiFi system [25]. To solve the problem, a two-level learning-based framework is proposed, with which the original problem is decomposed into two subproblems. The master-level problem, which optimizes the transmission time of LTE, is solved by a Q-learning based method, while the slave-level problem, which optimizes the user association, is solved by a game-theoretic learning method. With the proposed scheme, each newly enrolled user can autonomously choose the optimal resource allocation strategy and service provider.

Considering that the QoS of the LTE-U users is not guaranteed in the existing literature, the authors in [26] study the provision of QoS guarantees for LTE-U by jointly optimizing the resource allocation and the user association strategy. To address the QoS provision problem, the users in the LTE-U system are classified into best-effort users and QoS-preferred users, while the WiFi users are all treated as best-effort users. When the QoS requirement of an LTE-U user can be satisfied, the user becomes a QoS-preferred user; otherwise, it joins the WiFi system as a normal WiFi user and receives best-effort service. By quantifying the performance metrics, including the throughput and delay of the WiFi users and the LTE-U users, an optimization problem is formulated with the objective of maximizing the number of QoS-preferred users while guaranteeing fair coexistence of WiFi and LTE-U users. To solve it, the original problem is equivalently decomposed into two subproblems, i.e., a sum-power minimization problem and a user association problem. For the former, the deep-cut ellipsoid method is used to optimize the LTE transmission time, subcarrier assignment, and power allocation. For the latter, a successive user removal algorithm is proposed. With this scheme, all LTE-U users are QoS guaranteed, and the number of such users is maximized.

#### **2.6 Summary**

In this chapter, we have discussed the OSA technique in detail, starting from the basic OSA model on which the sensing-access protocol is designed. The sensing-throughput tradeoff has been presented, based on which cooperative spectrum sensing, sensing scheduling, and sequential spectrum sensing have been introduced. As a recent application of OSA in practical networks, LTE-U has been presented, for which several critical problems, including MAC protocol design and optimization, resource allocation, and user association, have been addressed.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 3 Spectrum Sensing Theories and Methods**

**Abstract** Spectrum sensing is a critical step in cognitive radio based DSM for learning the radio environment. Despite its long history, the study of spectrum sensing has attracted substantial interest from the wireless communications community in the past decade. In this chapter, we first present the fundamental theories of spectrum sensing from the optimal likelihood ratio test perspective; then we review the classical methods, including the Bayesian method, the robust hypothesis test, energy detection, matched filtering detection, and cyclostationary detection. After that, we discuss the robustness of the classical methods and review techniques that can enhance sensing reliability in hostile environments, including the eigenvalue-based sensing method and the covariance-based detection method. Finally, we discuss cooperative sensing, which uses data fusion or decision fusion from multiple sensors to enhance the sensing performance.

#### **3.1 Introduction**

As mentioned in Chap. 1, the basic idea of a cognitive radio is to support spectrum reuse or spectrum sharing, which allows the secondary networks/users to communicate over the spectrum allocated/licensed to the primary users (PUs) when the PUs are not fully utilizing it. To do so, the secondary users (SUs) are required to perform spectrum sensing frequently, i.e., to detect the presence of the PUs. Whenever the PUs become active, the SUs have to detect their presence with high probability and vacate the channel or reduce transmit power within a certain amount of time. For example, the IEEE 802.22 standard [1–3] requires the SUs to detect TV and wireless microphone signals and vacate the channel within two seconds once they become active. Furthermore, for TV signal detection, it is required to achieve a 90% probability of detection and a 10% probability of false alarm at signal-to-noise ratio (SNR) levels as low as −20 dB [1].

Spectrum sensing has re-emerged as a very active research topic in the past decade, despite its long history in the signal detection field. Quite a few new sensing methods have been proposed that take practical requirements and constraints into consideration. In this chapter, we first present the fundamental spectrum sensing theories from the optimal likelihood ratio test perspective, then review the classical methods, including the Bayesian method, the robust hypothesis test, energy detection, matched filtering detection, and cyclostationary detection. After that, we discuss the robustness of the classical methods and review techniques that can enhance sensing reliability in hostile environments, including eigenvalue-based and covariance-based detection. Finally, we discuss cooperative spectrum sensing techniques, which use data fusion or decision fusion to combine the sensing data from multiple sensors.

#### *3.1.1 System Model for Spectrum Sensing*

We consider an SU spectrum sensor with *M* ≥ 1 antennas. A similar scenario is the multi-node cooperative sensing, if all *M* distributed nodes are able to send their observed signals to a central node. There are two hypotheses: H0, the PU is inactive; and H1, the PU is active. The received signal at antenna/node *i*, *i* = 1,..., *M*, is given by

$$\mathcal{H}_0: \quad x_i(n) = \eta_i(n) \tag{3.1}$$

$$\mathcal{H}_1: \quad x_i(n) = s_i(n) + \eta_i(n) \tag{3.2}$$

where η*i*(*n*) is the received noise plus possible interference. At hypothesis H1, *si*(*n*) is the received primary signal at antenna/node *i*, which is the transmitted primary signal passing through the wireless channel to the sensing antenna/node. That is, *si*(*n*) can be written as

$$s_i(n) = \sum_{l=0}^{q_i} h_i(l) \tilde{s}(n-l) \tag{3.3}$$

where s̃(n) stands for the transmitted primary signal, and h_i(l) and q_i denote the propagation channel coefficients and the channel order from the PU to the *i*th antenna/node. For simplicity, it is assumed that the signal, noise, interference, and channel coefficients are all real numbers, though the theory and derivations can be directly extended to complex signals in most cases.
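The model of (3.1)–(3.3) can be sketched for a single antenna/node as follows. The BPSK-like primary signal and the channel taps are illustrative choices, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Received primary signal: the transmitted signal convolved with a
# length-(q_i + 1) channel, plus noise. All quantities real, as assumed above.
N = 1000
s_tilde = rng.choice([-1.0, 1.0], size=N)   # transmitted primary signal s~(n)
h = np.array([1.0, 0.5, 0.25])              # channel taps h_i(l), order q_i = 2
s = np.convolve(s_tilde, h)[:N]             # received primary signal s_i(n)
eta = rng.normal(scale=1.0, size=N)         # noise eta_i(n)

x_h0 = eta                                  # hypothesis H0: noise only
x_h1 = s + eta                              # hypothesis H1: signal plus noise
```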

The objective of spectrum sensing is to choose one of the two hypotheses, H_0 or H_1, based on the received signal samples at the SU sensor. The probability of detection, P_d, and the probability of false alarm, P_fa, are defined as follows:

$$P\_d = P\left(\mathcal{H}\_1|\mathcal{H}\_1\right) \tag{3.4}$$

$$P\_{fa} = P\left(\mathcal{H}\_1|\mathcal{H}\_0\right) \tag{3.5}$$

where P(·|·) denotes conditional probability. These two probabilities have useful physical meanings: P_d measures how well the PU is protected when it is using the spectrum, while P_fa determines how often the SU misses an opportunity when the PU is not using the spectrum. In general, a sensing algorithm is said to be "optimal" if it achieves the lowest P_fa for a given P_d with a fixed number of samples, though there can be other criteria for evaluating the performance of a sensing algorithm.
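The two probabilities are straightforward to estimate by Monte Carlo simulation. The toy detector below is an energy detector (one of the classical methods reviewed later in this chapter); the sample size, trial count, threshold, and 0 dB SNR are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_detect(x, threshold):
    """Decide H1 when the average sample energy exceeds the threshold."""
    return np.mean(x ** 2) > threshold

# Monte Carlo estimates of P_d = P(H1|H1) and P_fa = P(H1|H0) for a real
# Gaussian signal in unit-power noise at 0 dB SNR, so samples under H1 are
# N(0, 2) and samples under H0 are N(0, 1).
N, trials, thr = 100, 2000, 1.5
pd = np.mean([energy_detect(rng.normal(scale=np.sqrt(2.0), size=N), thr)
              for _ in range(trials)])
pfa = np.mean([energy_detect(rng.normal(size=N), thr)
               for _ in range(trials)])
```

Raising the threshold lowers both probabilities at once, which is the tradeoff a detector designer navigates.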

In order to apply both space and time processing, we stack the signals from the *M* antennas/nodes and *L* time samples to yield the following *M L* × 1 vectors:

$$\mathbf{x}(n) = [x_1(n) \ldots x_M(n) \;\; x_1(n-1) \ldots x_M(n-1) \;\; \ldots \;\; x_1(n-L+1) \ldots x_M(n-L+1)]^T \tag{3.6}$$

$$\mathbf{s}(n) = [s_1(n) \ldots s_M(n) \;\; s_1(n-1) \ldots s_M(n-1) \;\; \ldots \;\; s_1(n-L+1) \ldots s_M(n-L+1)]^T \tag{3.7}$$

$$\boldsymbol{\eta}(n) = [\eta_1(n) \ldots \eta_M(n) \;\; \eta_1(n-1) \ldots \eta_M(n-1) \;\; \ldots \;\; \eta_1(n-L+1) \ldots \eta_M(n-L+1)]^T \tag{3.8}$$

Based on the above vector forms, the hypothesis testing problem can be reformulated as

$$\mathcal{H}_0: \mathbf{x}(n) = \boldsymbol{\eta}(n), \quad n = 0, \ldots, N - 1 \tag{3.9}$$

$$\mathcal{H}_1: \mathbf{x}(n) = \mathbf{s}(n) + \boldsymbol{\eta}(n), \quad n = 0, \ldots, N - 1 \tag{3.10}$$

Accurate knowledge of the noise power σ_η² is key for many detection methods. Unfortunately, noise uncertainty is always present in practice. Due to the noise uncertainty [4–6], the estimated (or assumed) noise power may differ from the real noise power. Let the estimated noise power be σ̂_η² = α σ_η². It is assumed that α (in dB) is uniformly distributed in an interval [−B, B], where B is called the noise uncertainty factor [5]. In practice, the noise uncertainty factor of a receiving device is typically in the range of 1 to 2 dB, but the environment/interference noise uncertainty can be much higher [5].
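This uncertainty model can be sampled directly, which is useful when simulating detectors under imperfect noise calibration. A minimal sketch; the function name is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def uncertain_noise_power(sigma2_eta, B_db, size=1):
    """Sample estimated noise powers under the uncertainty model above.

    alpha (in dB) is uniform on [-B, B]; the estimate is
    sigma_hat^2 = alpha * sigma_eta^2 with alpha converted to linear scale.
    """
    alpha_db = rng.uniform(-B_db, B_db, size=size)
    return sigma2_eta * 10.0 ** (alpha_db / 10.0)
```

With B = 2 dB, the assumed noise power can be off from the true one by a factor of up to about 1.58 in either direction, which is exactly what breaks detectors that rely on an absolute energy threshold.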

#### *3.1.2 Design Challenges for Spectrum Sensing*

The design of spectrum sensing methods in CR faces a few specific challenges including, among others, the following.

1. Low sensing SNR: A cognitive radio may need to sense the primary signal at a very low SNR. This is to overcome the hidden node problem: an SU sensor may hear only a very weak signal from the primary transmitter, yet strongly interfere with the primary receiver if it transmits (here the primary receiver acts as a hidden node). To avoid such interference, one solution is to require the SU sensor to be capable of sensing the presence of the primary signal at very low SNR. For example, in the 802.22 standard, the sensing sensitivity requirement is as low as −20 dB.


While many spectrum sensing methods exist in the literature [8–15], many of them are based on ideal assumptions and do not work well in a hostile radio environment. We need spectrum sensing to be robust to unknown and possibly time-varying channels, noise, and interference.

#### **3.2 Classical Detection Theories and Methods**

In this section, we first provide the fundamental theories on spectrum sensing from the optimal likelihood ratio test perspective, then we review the classical methods including Bayesian method, robust hypothesis test, energy detection, matched filtering detection, and cyclostationary detection.

#### *3.2.1 Neyman–Pearson Theorem*

The Neyman–Pearson (NP) theorem [16–18] states that, for a given probability of false alarm, the test statistic that maximizes the probability of detection is the likelihood ratio test (LRT) defined as

$$T\_{LRT}(\mathbf{x}) = \frac{p(\mathbf{x}|\mathcal{H}\_1)}{p(\mathbf{x}|\mathcal{H}\_0)}\tag{3.11}$$

where p(·) denotes the probability density function (PDF), and **x** denotes the received signal vector, i.e., the aggregation of **x**(n), n = 0, 1, ..., N − 1. The likelihood ratio test decides H_1 when T_LRT(**x**) exceeds a threshold γ, and H_0 otherwise.
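For scalar Gaussian observations with fully known distributions, the LRT reduces to a one-line comparison. A toy sketch: the mean-shift model and the function name are illustrative, not part of the general theory above.

```python
import math

def lrt_decide(x, gamma, sigma2=1.0, mu1=1.0):
    """Decide H1 iff the likelihood ratio p(x|H1)/p(x|H0) exceeds gamma.

    Toy model: x ~ N(0, sigma2) under H0 and x ~ N(mu1, sigma2) under H1.
    """
    def pdf(v, mu):
        return math.exp(-(v - mu) ** 2 / (2 * sigma2)) \
            / math.sqrt(2 * math.pi * sigma2)

    return pdf(x, mu1) / pdf(x, 0.0) > gamma
```

With γ = 1, this decides H_1 exactly when x is closer to the mean under H_1 than to the mean under H_0; raising γ trades false alarms for missed detections.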


The major difficulty in using the LRT is its requirement for the exact distributions in (3.11). Obviously, the distribution of the random vector $\mathbf{x}$ under $\mathcal{H}\_1$ depends on the source signal distribution, the wireless channels, and the noise distribution, while the distribution of $\mathbf{x}$ under $\mathcal{H}\_0$ depends on the noise distribution alone. To use the LRT, we thus need knowledge of the channels as well as of the signal and noise distributions, which is difficult to obtain in practice.

If we assume that the channels are flat-fading and that the received source signal samples $s\_i(n)$ are independent over $n$, the PDFs in the LRT decouple as

$$p(\mathbf{x}|\mathcal{H}\_1) = \prod\_{n=0}^{N-1} p(\mathbf{x}(n)|\mathcal{H}\_1) \tag{3.12}$$

$$p(\mathbf{x}|\mathcal{H}\_0) = \prod\_{n=0}^{N-1} p(\mathbf{x}(n)|\mathcal{H}\_0) \tag{3.13}$$

If we further assume that the noise and signal samples are both Gaussian distributed, i.e., $\boldsymbol{\eta}(n) \sim \mathcal{N}(\mathbf{0}, \sigma\_\eta^2 \mathbf{I})$ and $\mathbf{s}(n) \sim \mathcal{N}(\mathbf{0}, \mathbf{R}\_s)$, the LRT becomes the estimator-correlator (EC) detector [16], for which the test statistic is given by

$$T\_{EC}(\mathbf{x}) = \sum\_{n=0}^{N-1} \mathbf{x}^T(n) \mathbf{R}\_s (\mathbf{R}\_s + \sigma\_\eta^2 \mathbf{I})^{-1} \mathbf{x}(n) \tag{3.14}$$

From (3.10), we see that $\mathbf{R}\_s(\mathbf{R}\_s + \sigma\_\eta^2 \mathbf{I})^{-1}\mathbf{x}(n)$ is actually the minimum-mean-squared-error (MMSE) estimate of the source signal $\mathbf{s}(n)$. Thus, $T\_{EC}(\mathbf{x})$ in (3.14) can be viewed as the correlation of the observed signal $\mathbf{x}(n)$ with the MMSE estimate of $\mathbf{s}(n)$.
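As a concrete illustration, here is a minimal numerical sketch of the EC statistic (3.14), assuming real-valued Gaussian samples and a known source covariance (as in a simulation); the covariance matrix, dimensions, and noise power below are hypothetical placeholders.

```python
import numpy as np

def estimator_correlator(X, R_s, noise_var):
    """EC statistic (3.14): correlate each observation x(n) with its
    MMSE estimate R_s (R_s + noise_var * I)^{-1} x(n)."""
    M = R_s.shape[0]
    W = R_s @ np.linalg.inv(R_s + noise_var * np.eye(M))  # MMSE filter
    return sum(x @ W @ x for x in X.T)                    # X is M x N

rng = np.random.default_rng(1)
M, N = 4, 500
R_s = 0.5 * np.ones((M, M)) + 0.5 * np.eye(M)   # hypothetical source covariance
noise = rng.standard_normal((M, N))
sig = np.linalg.cholesky(R_s) @ rng.standard_normal((M, N)) + noise
t_h0 = estimator_correlator(noise, R_s, 1.0)
t_h1 = estimator_correlator(sig, R_s, 1.0)
print(t_h1 > t_h0)   # the statistic grows when the signal is present -> True
```

Since the MMSE filter is positive semidefinite, the statistic accumulates the energy of the signal component and is markedly larger under $\mathcal{H}\_1$.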

The EC detector needs to know the source signal covariance matrix $\mathbf{R}\_s$ and the noise power $\sigma\_\eta^2$. When it is not yet known whether a signal is present, it is unrealistic to assume knowledge of the source signal covariance matrix, which depends on the unknown channels.

#### *3.2.2 Bayesian Method and the Generalized Likelihood Ratio Test*

In practical scenarios, it is difficult to know the likelihood functions exactly. For instance, we may not know the noise power $\sigma\_\eta^2$ and/or the source signal covariance $\mathbf{R}\_s$. Hypothesis testing in the presence of uncertain parameters is known as composite hypothesis testing. In classical detection theory, there are two main approaches to this problem: the Bayesian method and the generalized likelihood ratio test (GLRT).

In the Bayesian method [16], the objective is to evaluate the likelihood functions needed in the LRT through marginalization, i.e.,

$$p(\mathbf{x}|\mathcal{H}\_0) = \int p(\mathbf{x}|\mathcal{H}\_0, \Theta\_0) p(\Theta\_0|\mathcal{H}\_0) d\Theta\_0 \tag{3.15}$$

where $\Theta\_0$ represents all the unknowns when $\mathcal{H}\_0$ is true. Note that the integration in (3.15) should be replaced with a summation if the elements of $\Theta\_0$ are drawn from a discrete sample space. Critically, we have to assign a prior distribution $p(\Theta\_0|\mathcal{H}\_0)$ to the unknown parameters. In other words, we need to treat these unknowns as random variables and use assumed prior distributions to express our belief in their values. Similarly, $p(\mathbf{x}|\mathcal{H}\_1)$ can be defined. The main drawbacks of the Bayesian approach are listed as follows:


To make the LRT applicable, we may estimate the unknown parameters first and then use the estimates in the LRT. Known estimation techniques can be used for this purpose. However, there is one major difference from the conventional estimation problem: in conventional estimation we know that the signal is present, while in spectrum sensing we are not sure whether a source signal is present at all (detecting its presence is the first priority). Moreover, the unknown parameters differ under the two hypotheses, $\mathcal{H}\_0$ and $\mathcal{H}\_1$.

The GLRT [16, 18] is an efficient method to resolve this problem and has been used in many applications, e.g., radar and sonar signal processing. In this method, the maximum likelihood estimates of the unknown parameters under $\mathcal{H}\_0$ and $\mathcal{H}\_1$ are first obtained as

$$
\hat{\Theta}\_0 = \arg\max\_{\Theta\_0} p(\mathbf{x}|\mathcal{H}\_0, \Theta\_0)
$$

$$
\hat{\Theta}\_1 = \arg\max\_{\Theta\_1} p(\mathbf{x}|\mathcal{H}\_1, \Theta\_1)
$$

where $\Theta\_0$ and $\Theta\_1$ are the sets of unknown parameters under $\mathcal{H}\_0$ and $\mathcal{H}\_1$, respectively. Then, the GLRT statistic is formed as

$$T\_{GLRT} = \frac{p(\mathbf{x}|\hat{\Theta}\_1, \mathcal{H}\_1)}{p(\mathbf{x}|\hat{\Theta}\_0, \mathcal{H}\_0)}\tag{3.16}$$

Finally, the GLRT decides $\mathcal{H}\_1$ if $T\_{GLRT}(\mathbf{x}) > \gamma$, where $\gamma$ is a threshold, and $\mathcal{H}\_0$ otherwise.

It is not guaranteed that the GLRT is optimal, or even close to optimal, as the sample size goes to infinity. Since the unknown parameters in $\Theta\_0$ and $\Theta\_1$ depend heavily on the statistical models of the noise and signal, their estimates can be vulnerable to modeling errors. Under the assumption of Gaussian distributed source signals and noise, and flat-fading channels, some efficient spectrum sensing methods based on the GLRT can be found in [19–21].
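To make the GLRT recipe concrete, the following toy sketch (not from the text) applies it to a simple composite problem: detecting an unknown constant level in white Gaussian noise of unknown variance. Substituting the ML estimates into (3.16) collapses the ratio to a simple closed form; all numbers are placeholders.

```python
import numpy as np

def glrt_mean(x):
    """Toy GLRT: H0: x(n) ~ N(0, s2) vs H1: x(n) ~ N(mu, s2), with both
    mu and s2 unknown.  Plugging the ML estimates into (3.16) reduces
    the ratio to (s2_hat_H0 / s2_hat_H1)^(N/2)."""
    n = len(x)
    s2_h0 = np.mean(x ** 2)              # ML noise power under H0 (mu = 0)
    mu_h1 = np.mean(x)                   # ML mean under H1
    s2_h1 = np.mean((x - mu_h1) ** 2)    # ML noise power under H1
    return (s2_h0 / s2_h1) ** (n / 2)

rng = np.random.default_rng(2)
noise = rng.standard_normal(200)
print(glrt_mean(noise) < glrt_mean(noise + 0.5))   # True: a DC level inflates it
```

The ratio is always at least one (the $\mathcal{H}\_1$ fit can only improve the likelihood), so the threshold $\gamma$ must be set above one from the null distribution, cf. Sect. 3.2.8.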

#### *3.2.3 Robust Hypothesis Testing*

The search for robust detection methods has long been of great interest in signal processing and many other fields. In this section, we start from a general paradigm called robust hypothesis testing, and then review a few methods that are robust to certain impairments. In Sects. 3.3–3.5, we will discuss several newer methods, including eigenvalue-based detection, covariance-based detection, and cooperative sensing.

A useful paradigm for designing robust detectors is the maxmin approach, which maximizes the worst-case detection performance. Among others, two techniques are very useful for robust spectrum sensing: robust hypothesis testing [22, 23] and robust matched filtering [24, 25]. In the following, we give a brief overview of each.

Let the PDF of a received signal sample be $f\_1$ under hypothesis $\mathcal{H}\_1$ and $f\_0$ under hypothesis $\mathcal{H}\_0$. If we know these two functions, the LRT-based detection described in Sect. 3.2.1 is optimal. However, in practice, due to channel impairments, noise uncertainty, and interference, it is very hard to obtain these two functions exactly. One possible situation is when we only know that $f\_1$ and $f\_0$ belong to certain classes. One such class is the $\epsilon$-contamination class given by

$$\begin{aligned} \mathcal{H}\_0: f\_0 \in F\_0, \ F\_0 &= \{ (1 - \epsilon\_0) f\_0^0 + \epsilon\_0 g\_0 \} \\ \mathcal{H}\_1: f\_1 \in F\_1, \ F\_1 &= \{ (1 - \epsilon\_1) f\_1^0 + \epsilon\_1 g\_1 \} \end{aligned} \tag{3.17}$$

where $f\_j^0$ ($j = 0, 1$) is the nominal PDF under hypothesis $\mathcal{H}\_j$, $\epsilon\_j \in [0, 1]$ is the maximum degree of contamination, and $g\_j$ is an arbitrary density function. Assume that we only know $f\_j^0$ and $\epsilon\_j$ (an upper bound on the contamination), $j = 0, 1$. The problem is then to design a detection scheme that minimizes the worst-case probability of error (the probability of false alarm plus the probability of mis-detection), i.e., to find a detector $\hat{\Psi}$ such that

$$\hat{\Psi} = \arg\min\_{\Psi} \max\_{(f\_0, f\_1) \in F\_0 \times F\_1} \left( P\_{fa}(f\_0, f\_1, \Psi) + 1 - P\_d(f\_0, f\_1, \Psi) \right) \tag{3.18}$$

Huber [22] proved that the optimal test statistic is a "censored" version of the LRT given by

$$T\_{CLRT}(\mathbf{x}) = \prod\_{n=0}^{N-1} r(\mathbf{x}(n)) \tag{3.19}$$

where

$$r(t) = \begin{cases} c\_1, & c\_1 \le \frac{f\_1^0(t)}{f\_0^0(t)}\\ \frac{f\_1^0(t)}{f\_0^0(t)}, & c\_0 < \frac{f\_1^0(t)}{f\_0^0(t)} < c\_1\\ c\_0, & \frac{f\_1^0(t)}{f\_0^0(t)} \le c\_0 \end{cases} \tag{3.20}$$

and $c\_0$, $c\_1$ are nonnegative numbers related to $\epsilon\_0$, $\epsilon\_1$, $f\_0^0$, and $f\_1^0$ [22, 26]. Note that choosing $c\_0 = 0$ and $c\_1 = +\infty$ recovers the conventional LRT with respect to the nominal PDFs $f\_0^0$ and $f\_1^0$.
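A minimal sketch of the censoring rule: the per-sample nominal ratios and the clipping constants below are illustrative placeholders, not values derived from particular $\epsilon\_0$, $\epsilon\_1$.

```python
from math import prod

def censored_ratio(lr, c0=0.5, c1=2.0):
    """Huber's clipping rule (3.20): censor a per-sample likelihood
    ratio into [c0, c1] so contamination outliers cannot dominate."""
    return min(max(lr, c0), c1)

def t_clrt(ratios, c0=0.5, c1=2.0):
    """Censored LRT statistic (3.19): product of clipped per-sample ratios."""
    return prod(censored_ratio(r, c0, c1) for r in ratios)

# A wild outlier (nominal ratio 1e6) contributes only the cap c1 = 2.0:
print(t_clrt([1.2, 0.8, 1e6]))   # approximately 1.92
```

With an unclipped LRT, the single contaminated sample would dominate the product; censoring bounds its influence.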

One special case is robust matched filtering. We rewrite the model (3.10) in vector form as

$$\mathcal{H}\_0: \mathbf{x} = \eta \tag{3.21}$$

$$\mathcal{H}\_1: \mathbf{x} = \mathbf{s} + \boldsymbol{\eta} \tag{3.22}$$

where $\mathbf{s}$ is the signal vector and $\boldsymbol{\eta}$ is the noise vector. Suppose that $\mathbf{s}$ is known. In general, a matched-filtering detector has the form $T\_{MF} = \mathbf{g}^T \mathbf{x}$. Let the covariance matrix of the noise be $\mathbf{R}\_\eta = \mathrm{E}(\boldsymbol{\eta}\boldsymbol{\eta}^T)$. If $\mathbf{R}\_\eta = \sigma\_\eta^2 \mathbf{I}$, it is known that choosing $\mathbf{g} = \mathbf{s}$ is optimal. In general, it is easy to verify that the optimal $\mathbf{g}$ maximizing the SNR is

$$\mathbf{g} = \mathbf{R}\_{\eta}^{-1}\mathbf{s}.\tag{3.23}$$

In practice, the signal vector $\mathbf{s}$ may not be known exactly. For example, $\mathbf{s}$ may only be known to lie near a nominal $\mathbf{s}\_0$, with the error modeled by

$$||\mathbf{s} - \mathbf{s}\_0|| \le \Delta \tag{3.24}$$

where $\Delta$ is an upper bound on the Euclidean norm of the error. In this case, we are interested in finding a $\mathbf{g}$ that maximizes the worst-case SNR, i.e.,

$$\hat{\mathbf{g}} = \arg\max\_{\mathbf{g}} \min\_{\mathbf{s}: ||\mathbf{s} - \mathbf{s}\_0|| \le \Delta} \text{SNR}(\mathbf{s}, \mathbf{g}) \tag{3.25}$$

It was proved in [24, 25] that the optimal solution for the above maxmin problem is

$$\hat{\mathbf{g}} = (\mathbf{R}\_{\eta} + \delta \mathbf{I})^{-1} \mathbf{s}\_{0} \tag{3.26}$$

where $\delta$ is a nonnegative number chosen such that $\delta^2||\hat{\mathbf{g}}||^2 = \Delta^2$.

Note that there is also research on robust matched-filtering detection when the signal has other types of uncertainty [26]. Moreover, if the noise has uncertainties, i.e., $\mathbf{R}\_\eta$ is not known exactly, or both noise and signal have uncertainties, optimal robust matched-filtering detectors have also been found for some specific uncertainty models [26].
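The closed form (3.26) is straightforward to compute; the sketch below uses a hypothetical diagonal noise covariance and nominal signal to show how $\delta$ de-emphasizes the weights relative to the nominal solution (3.23).

```python
import numpy as np

def robust_mf_weights(R_eta, s0, delta):
    """Robust matched-filter weights (3.26): g = (R_eta + delta*I)^{-1} s0;
    delta = 0 recovers the nominal solution (3.23)."""
    return np.linalg.solve(R_eta + delta * np.eye(len(s0)), s0)

R_eta = np.diag([1.0, 2.0, 4.0])            # hypothetical noise covariance
s0 = np.array([1.0, 1.0, 1.0])              # nominal signal
g_nom = robust_mf_weights(R_eta, s0, 0.0)   # nominal filter (3.23)
g_rob = robust_mf_weights(R_eta, s0, 1.0)   # hedged against ||s - s0|| <= Delta
print(g_nom, g_rob)   # loading shrinks the weights, most where noise is weakest
```

The diagonal loading $\delta \mathbf{I}$ prevents the filter from leaning too heavily on low-noise dimensions where the nominal signal may be wrong.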

#### *3.2.4 Energy Detection*

If we further assume that $\mathbf{R}\_s = \sigma\_s^2 \mathbf{I}$, the EC detection in (3.14) reduces to the well-known energy detector (ED) [4, 27, 28], for which the test statistic is given as follows (discarding irrelevant constant terms):

$$T\_{ED} = \frac{1}{N} \sum\_{n=0}^{N-1} \mathbf{x}^T(n)\mathbf{x}(n) \tag{3.27}$$

Note that for the multi-antenna/node case, $T\_{ED}$ is actually the summation of the energies from all antennas/nodes, which is a straightforward cooperative sensing scheme [29–31].

The test statistic is compared with a threshold to make a decision. Obviously, the threshold should be related to the noise power; hence energy detection needs a priori knowledge of the noise variance (power). It has been shown that energy detection is very sensitive to inaccurate estimation of the noise power. We will discuss this in detail later.

From the derivation above, we know that *energy detection is the optimal detection if there is only one antenna, the signal and noise samples are independent and identically distributed (iid) Gaussian random variables, and the noise variance (power) is known*. Even if the signal and noise are not Gaussian distributed, in most cases energy detection is still approximately optimal for uncorrelated signal and noise at low SNR [32]. In general, the ED is not optimal if the signal or noise samples are correlated.
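A minimal single-antenna sketch of the energy detector, with the threshold set from the Gaussian approximation of its null distribution (discussed further in Sect. 3.2.8); the noise power is assumed known, and the SNR of the test signal is a placeholder.

```python
import numpy as np
from statistics import NormalDist

def energy_detect(x, noise_power, pfa=0.01):
    """Single-antenna energy detector: statistic (3.27) compared with a
    threshold derived from the Gaussian approximation of its H0 law."""
    n = len(x)
    t_ed = np.mean(np.abs(x) ** 2)              # test statistic (3.27), M = 1
    q_inv = NormalDist().inv_cdf(1.0 - pfa)     # Q^{-1}(Pfa)
    threshold = noise_power * (1.0 + np.sqrt(2.0 / n) * q_inv)
    return bool(t_ed > threshold)               # True -> decide H1

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)             # H0: unit-power noise only
signal = noise + rng.standard_normal(10_000)    # H1: signal at 0 dB SNR
print(energy_detect(signal, noise_power=1.0))   # detects the signal -> True
```

Note that the threshold scales directly with the assumed noise power, which is exactly why a noise-power estimation error translates into a wrong operating point.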

The energy detection can be used in different ways and sometimes combined with other techniques.

(1) We can filter the signal before energy detection. Let $f(l)$ ($l = 0, 1, \ldots, L$) be a filter (or the combination of a bank of filters). The received signal after filtering is

$$\mathbf{y}\_i(n) = \sum\_{l=0}^{L} f(l)\mathbf{x}\_i(n-l) \tag{3.28}$$

The energy detection after the filtering is therefore

$$T\_{ED,Filter} = \frac{1}{N} \sum\_{n=0}^{N-1} ||\mathbf{y}(n)||^2 \tag{3.29}$$

For practical applications, we can choose a narrowband filter or a bank of narrowband filters if we want to detect the signals in specific frequency bands.

(2) Energy detection can also be done in the frequency domain. Let $S\_i(k)$ be the power spectral density (PSD) of the received signal $x\_i(n)$. There are different methods to estimate the PSD, including the periodogram, the multitaper method (MTM) [33, 34], and others. For the periodogram method, the received signal is divided into $P$ non-overlapping blocks $x\_{i,p}(n)$ of length $N\_f$. Let $X\_{i,p}(k)$ be the discrete Fourier transform (DFT) of $x\_{i,p}(n)$; the DFT can be computed by the fast Fourier transform (FFT). The PSD is estimated as

$$S\_i(k) = \frac{1}{P} \sum\_{p=1}^{P} \left| X\_{i,p}(k) \right|^2 \tag{3.30}$$

The test statistic of the frequency domain energy detection is:

$$T\_{ED,F} = \frac{1}{NM} \sum\_{i=1}^{M} \sum\_{k=0}^{N\_f - 1} S\_i(k) \tag{3.31}$$

Again the test statistic is compared with a threshold to make a decision and the threshold should be related to the noise power.

Among the spectral estimation methods, the MTM has been proved to achieve performance close to that of the maximum likelihood PSD estimator [33, 34]. Thus it can provide a more accurate PSD estimate for spectrum sensing [35], though at increased computational complexity.

(3) The frequency-domain energy detection can also be done in a more flexible way. Let $\psi$ be a subset of $\{0, 1, \ldots, N\_f - 1\}$. We can select the signal frequencies within the bin $\psi$ for detection, and may also assign different weights to different antennas and frequencies [36]. The test statistic is then

$$T\_{ED,F} = \frac{1}{M|\psi|} \sum\_{i=1}^{M} \sum\_{k \in \psi} g\_{i,k} S\_i(k) \tag{3.32}$$

where $g\_{i,k}$ is the weight for antenna $i$ and frequency index $k$. This can give better performance if we know that the power of the signal of interest has peaks in, or is concentrated in, the frequency bin $\psi$. For example, for ATSC signal detection, we know that the signal has a strong peak at the pilot, so we can choose $\psi$ to be the frequency indices around the pilot location. A special case is to choose just the one frequency index nearest to the pilot location. In some OFDM-based standards, the pilot subcarriers have higher power than the other subcarriers, so we can assign larger weights to the pilot subcarriers.

Another variation of the method is to replace the averaging of the signal PSD with a maximization [37]. The test statistic becomes

$$T\_{ED,Max} = \max\_{k \in \psi} \frac{1}{M} \sum\_{i=1}^{M} S\_i(k) \tag{3.33}$$

The energy detection can also be performed in transform domains other than the Fourier domain. In general, let $X\_{i,T}(k)$ ($k = 0, 1, \ldots, K-1$) be the transformed version of the original received signal $x\_i(n)$. The transform-domain energy detection is

$$T\_{ED,T} = \frac{1}{MK} \sum\_{i=1}^{M} \sum\_{k=0}^{K-1} \left| X\_{i,T}(k) \right|^2 \tag{3.34}$$

For example, the wavelet transform can be chosen, which leads to wavelet multi-resolution detection [38]. In general, the transform should be chosen based on the signal and noise properties and/or the purpose of detection.
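As a sanity check on the frequency-domain route, the following single-antenna sketch implements the periodogram estimate (3.30) and an averaged statistic in the spirit of (3.31); the block length and data are placeholders. By Parseval's relation, averaging the periodogram over all bins reproduces the time-domain average energy.

```python
import numpy as np

def periodogram_ed(x, n_fft):
    """Frequency-domain energy detection, single antenna: average |DFT|^2
    over P non-overlapping blocks as in (3.30), then over all bins."""
    P = len(x) // n_fft
    blocks = x[: P * n_fft].reshape(P, n_fft)
    psd = np.mean(np.abs(np.fft.fft(blocks, axis=1)) ** 2, axis=0)  # (3.30)
    return np.mean(psd) / n_fft           # normalized frequency-domain statistic

rng = np.random.default_rng(3)
x = rng.standard_normal(1024)
# Parseval: averaging over all bins equals the time-domain average energy.
print(abs(periodogram_ed(x, 64) - np.mean(x ** 2)) < 1e-9)   # True
```

Restricting the bin average to a subset $\psi$ (and weighting the bins) then gives the flexible statistic (3.32).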

#### *3.2.5 Sequential Energy Detection*

In the discussions above, we assumed that the sensing time (the number of signal samples $N$) is a predefined fixed number, and the detector is designed to have optimal performance for that sample size. In some applications, the sensing time need not be predefined; the goal is instead to design a detector with the smallest average sensing time. A popular approach to this is sequential detection [39–41].

In general, a sequential detector makes a decision whenever a new signal sample becomes available. For simplicity, we consider the single-detector case here. Let $\mathbf{x}\_k = (x(0), x(1), \ldots, x(k-1))^T$ be the signal samples available at time $k$. The sequential detector calculates a test statistic $T(\mathbf{x}\_k)$ and makes a decision using two thresholds:

$$T(\mathbf{x}\_k) \ge \gamma\_1: \mathcal{H}\_1 \tag{3.35}$$

$$T(\mathbf{x}\_k) \le \gamma\_0: \mathcal{H}\_0 \tag{3.36}$$

where $\gamma\_1 > \gamma\_0$ are predefined thresholds. If the test statistic lies between them, i.e., $\gamma\_0 < T(\mathbf{x}\_k) < \gamma\_1$, the detector does not yet decide on $\mathcal{H}\_1$ or $\mathcal{H}\_0$: more samples are required. The detector continues this process as new signal samples arrive until a decision on $\mathcal{H}\_1$ or $\mathcal{H}\_0$ is reached.

Thus we do not know in advance when the detection will finish: the sensing time is a random variable. It is proved that the average sensing time is shorter than that of conventional fixed-sample-size methods (the Wald–Wolfowitz theorem [40]); however, the worst-case sensing time can be much longer.

The use of energy detection for sequential detection is discussed in [40], where some extensions and refinements can also be found.
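The two-threshold stopping rule (3.35)–(3.36) can be sketched as follows; the statistic and thresholds below are illustrative placeholders, not Wald's optimal boundaries.

```python
def sequential_detect(samples, gamma0, gamma1, stat):
    """Two-threshold sequential test (3.35)-(3.36): stop as soon as the
    running statistic leaves the undecided band (gamma0, gamma1)."""
    for k in range(1, len(samples) + 1):
        t = stat(samples[:k])
        if t >= gamma1:
            return "H1", k          # signal declared after k samples
        if t <= gamma0:
            return "H0", k          # noise-only declared after k samples
    return "undecided", len(samples)

def avg_energy(s):
    """Running average sample energy, used here as a placeholder statistic."""
    return sum(v * v for v in s) / len(s)

print(sequential_detect([2.0, 2.1, 1.9], 0.5, 1.5, avg_energy))  # ('H1', 1)
```

A strong signal triggers a decision after very few samples, which is precisely the appeal of sequential detection; a borderline statistic keeps the test running, which is the source of the long worst-case sensing time.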

#### *3.2.6 Matched Filtering*

If we assume that the noise is Gaussian distributed and the source signal $\mathbf{s}(n)$ is deterministic and known to the receiver, it is easy to show that the LRT becomes the matched filtering based detector [16–18], for which the test statistic is


$$T\_{MF} = \mathrm{Re} \left(\frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} \mathbf{s}^\dagger(n)\mathbf{x}(n)\right) \tag{3.37}$$

The test statistic is compared with a threshold to make a decision, and the threshold should again be related to the noise power. Unlike energy detection, matched filtering (MF) is more robust to inaccurate noise power estimation [6, 7]. MF is also widely used in other fields, such as radar signal processing.

Hence, theoretically, *matched filtering is optimal if the signal is deterministic and known at the receiver*. The major difficulties for MF are the time delay, the frequency offset, and the time-dispersive channel.

For simplicity, we consider the single-antenna case here. In general, for a deterministic transmitted signal $s(n)$, the received signal $x(n)$ can be written as

$$x(n) = e^{j2\pi\epsilon n} \sum\_{l=0}^{L} h(l)\,s(n-l-\tau) + \eta(n) \tag{3.38}$$

where $\tau$ is the timing error, $\epsilon$ is the normalized frequency offset, and $h(l)$ is the channel impulse response.

In the ideal case of $\epsilon = 0$, $\tau = 0$, $L = 0$, and $h(0) > 0$,

$$T\_{MF} = \frac{h(0)}{\sqrt{N}} \sum\_{n=0}^{N-1} |s(n)|^2 + \mathrm{Re} \left( \frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} s^\*(n)\eta(n) \right) \tag{3.39}$$

In this case, MF is optimal.

In practical wireless communication applications, the carrier frequency offset (CFO) and timing error may not be zero, and the channel is most likely frequency-selective ($L > 0$). So in general, the test statistic of the MF should be expressed as

$$T\_{MF} = \mathrm{Re} \left( \frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} \sum\_{l=0}^{L} e^{j2\pi\epsilon n} h(l)\, s^\*(n) s(n-l-\tau) \right) + \mathrm{Re} \left( \frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} s^\*(n) \eta(n) \right) \tag{3.40}$$

The *CFO*, *timing error*, and *frequency selectivity* are three major obstacles for the MF. Any one of them can degrade the performance of the MF dramatically.

To deal with the timing error, a commonly used solution is to average, or take the maximum of, the test statistic over different time delays of the received signal. Let

$$T\_{MF}(\upsilon) = \mathrm{Re} \left( \frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} \mathbf{s}^{\dagger}(n) \mathbf{x}(n + \upsilon) \right) \tag{3.41}$$

be the test statistic of the signal with time delay $-\upsilon$. Obviously, the best value is $\upsilon = \tau$. If we do not know $\tau$, we can average over different $\upsilon$ or take the maximum, that is,

$$\hat{T}\_{MF,A} = \frac{1}{2\Delta} \sum\_{\upsilon=-\Delta}^{\Delta} T\_{MF}(\upsilon) \tag{3.42}$$

or

$$\hat{T}\_{MF,M} = \max\_{\upsilon=-\Delta}^{\Delta} T\_{MF}(\upsilon) \tag{3.43}$$

To tackle the problem of the CFO, we can modify the MF test using absolute values:

$$T\_{MF} = \frac{1}{\sqrt{N}} \sum\_{n=0}^{N-1} |\mathbf{s}^\dagger(n)\mathbf{x}(n)| \tag{3.44}$$

This test is not affected by the carrier frequency offset.

Similarly, we can average or take the maximum of the absolute-value test statistic to deal with the timing error.

Like energy detection, the MF can also be implemented in the frequency domain or a transform domain. The use of the MF for ATSC signal detection is discussed in [42].
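A small sketch of the delay search in (3.41)–(3.43), using the magnitude of the correlation so that an unknown constant phase rotation does not matter; the pilot sequence, delay, and noise level below are hypothetical.

```python
import numpy as np

def mf_stat(s, x, v):
    """Matched-filter statistic at trial delay v, cf. (3.41); the magnitude
    makes it insensitive to an unknown constant phase rotation."""
    n = len(s)
    return np.abs(np.vdot(s, x[v : v + n])) / np.sqrt(n)

def best_delay(s, x, max_delay):
    """Maximize over candidate delays, as in (3.43)."""
    return max(range(max_delay + 1), key=lambda v: mf_stat(s, x, v))

rng = np.random.default_rng(4)
s = rng.standard_normal(256) + 1j * rng.standard_normal(256)   # known pilot
tau = 5                                                        # unknown delay
x = np.concatenate([np.zeros(tau), s, np.zeros(16)])           # delayed pilot
x = x * np.exp(1j * 0.7)                                       # phase rotation
x += 0.1 * (rng.standard_normal(len(x)) + 1j * rng.standard_normal(len(x)))
print(best_delay(s, x, 10))   # recovers tau = 5
```

The correlation peaks sharply at the true delay, which is why the maximum in (3.43) is usually preferred over the average in (3.42) when the delay uncertainty window is large.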

#### *3.2.7 Cyclostationary Detection*

Practical communication signals may have special statistical features. For example, digitally modulated signals have non-random components such as the double sidedness due to the sine wave carrier and the keying rate due to the symbol period. Such signals have a special statistical property called cyclostationarity, i.e., their statistical parameters vary periodically in time. This cyclostationarity can be extracted by the cyclic autocorrelation (CAC) or the spectral correlation density (SCD) [43–45].

For simplicity, in this section we consider the single-antenna case, that is, $M = 1$, and omit the antenna subscript. For a given $\alpha$ and time lag $\tau$, the CAC of a signal $x(t)$ is defined as

$$R\_x^\alpha(\tau) = \lim\_{\Delta \to \infty} \frac{1}{\Delta} \int\_{-\frac{\Delta}{2}}^{\frac{\Delta}{2}} x\left(t + \frac{\tau}{2}\right) x^\*\left(t - \frac{\tau}{2}\right) e^{-j2\pi \alpha t} \, dt \tag{3.45}$$

where $\alpha$ is called a cyclic frequency. If there exists at least one non-zero $\alpha$ such that $\max\_\tau |R\_x^\alpha(\tau)| > 0$, we say that $x(t)$ exhibits cyclostationarity. The values of such $\alpha$ depend on the type of modulation, the symbol duration, etc. For example, for a digitally modulated signal with symbol duration $T\_b$, cyclostationary features exist at $\alpha = k/T\_b$ and $\alpha = \pm 2f\_c + k/T\_b$, where $f\_c$ is the carrier frequency and $k$ is an integer. Equivalently, we can define the SCD, the Fourier transform of the CAC, as follows:

$$S\_x^\alpha(f) = \int\_{-\infty}^{\infty} R\_x^\alpha(\tau) e^{-j2\pi f\tau} d\tau \tag{3.46}$$

In binary spectrum sensing or signal detection, there are two hypotheses: $\mathcal{H}\_0$, signal absent; and $\mathcal{H}\_1$, signal present. The received signal can be written as

$$\mathcal{H}\_0: \mathbf{y}(t) = \eta(t) \tag{3.47}$$

$$\mathcal{H}\_1: \mathbf{y}(t) = h(t) \otimes \mathbf{x}(t) + \eta(t) \tag{3.48}$$

where *x*(*t*) denotes the transmitted signal from the primary user, *h*(*t*) is the channel response, and η(*t*) is the additive noise.

When the source signal $x(t)$ passes through a wireless channel $h(t)$, the received signal is impaired by the unknown propagation channel. It can be shown that the SCD function of $y(t)$ is

$$S\_y^\alpha(f) = H(f + \alpha/2)H^\*(f - \alpha/2)S\_x^\alpha(f) \tag{3.49}$$

where $^\*$ denotes conjugation, $\alpha$ denotes a cyclic frequency of $x(t)$, $H(f)$ is the Fourier transform of the channel $h(t)$, and $S\_x^\alpha(f)$ is the SCD function of $x(t)$. Thus, the unknown channel can have a major impact on the strength of the SCD at certain cyclic frequencies.

Cyclostationary detection (CSD) is well studied for the case when Nyquist-rate signal samples are available. The rationale behind CSD is that the signal $x(t)$ is cyclostationary, that is, there exists at least one non-zero cyclic frequency $\alpha$ such that $R\_x^\alpha(\tau) \neq 0$ for some $\tau$, while the noise $\eta(t)$ is purely stationary, that is, for any non-zero $\alpha$, $R\_\eta^\alpha(\tau) = 0$ for all $\tau$, or equivalently $S\_\eta^\alpha(f) = 0$ for all $f$. In the following, we list the cyclic frequencies of some cyclostationary signals encountered in practical applications [44, 45].

	- a. Amplitude-Shift Keying: $x(t) = [\sum\_{n=-\infty}^{\infty} a\_n p(t - n\Delta\_s - t\_0)] \cos(2\pi f\_c t + \phi\_0)$, where $\Delta\_s$ is the symbol duration. It has cyclic frequencies at $k/\Delta\_s$, $k \neq 0$, and $\pm 2f\_c + k/\Delta\_s$, $k = 0, \pm 1, \pm 2, \ldots$.
	- b. Phase-Shift Keying: $x(t) = \cos[2\pi f\_c t + \sum\_{n=-\infty}^{\infty} a\_n p(t - n\Delta\_s - t\_0)]$. For BPSK, it has cyclic frequencies at $k/\Delta\_s$, $k \neq 0$, and $\pm 2f\_c + k/\Delta\_s$, $k = 0, \pm 1, \pm 2, \ldots$. For QPSK, it has cyclic frequencies at $k/\Delta\_s$, $k \neq 0$.

Let $\alpha\_0$ be a non-zero cyclic frequency such that $R\_x^{\alpha\_0}(\tau) \neq 0$ for some $\tau$. Assume that the signal and noise are mutually independent. Then we have

$$\mathcal{H}\_0: R\_y^{\alpha\_0}(\tau) = 0 \tag{3.50}$$

$$\mathcal{H}\_1: R\_y^{\alpha\_0}(\tau) = R\_x^{\alpha\_0}(\tau) \neq 0, \text{ for some } \tau \tag{3.51}$$

In the frequency domain, this turns to

$$\mathcal{H}\_0: S\_y^{\alpha\_0}(f) = 0 \tag{3.52}$$

$$\mathcal{H}\_1: S\_y^{\alpha\_0}(f) = S\_x^{\alpha\_0}(f) \neq 0, \text{ for some } f \tag{3.53}$$

Therefore, $\mathcal{H}\_0$ and $\mathcal{H}\_1$ can be distinguished by generating a test statistic from the CAC/SCD of the received signal at cyclic frequency $\alpha\_0$ and comparing it with a threshold. A typical test statistic is $\mathcal{C}\_1 = \int |R\_y^{\alpha\_0}(\tau)|^2 d\tau$ or, equivalently, $\mathcal{C}\_1 = \int |S\_y^{\alpha\_0}(f)|^2 df$.

In practice, the received signal is sampled and only a limited number of samples is available. Let $T\_s$ be the sampling period and $N$ the number of samples. The discrete version of the CAC is

$$R\_y^\alpha(kT\_s) = \frac{1}{N} \sum\_{n=0}^{N-1} y((n+k)T\_s)y^\*(nT\_s)e^{-j2\pi \alpha nT\_s}\tag{3.54}$$

where the lag $k = 0, 1, \ldots, M-1$ with $M \ll N$. Accordingly, the discrete version of the test statistic is

$$\mathcal{C}\_1 = \sum\_{k=0}^{M-1} |R\_y^{\alpha\_0}(kT\_s)|^2 \tag{3.55}$$

In CSD, the test statistic is compared with a threshold to make a decision. Intuitively, the threshold should be related to the noise power. Since the accurate noise power is difficult to acquire in practice [7, 46, 47], we can use its maximum likelihood estimate:

$$\hat{\sigma}\_{\eta}^{2} = \frac{1}{N} \sum\_{n=0}^{N-1} |\mathbf{y}(nT\_{s})|^{2} \tag{3.56}$$

The threshold is then chosen as $\beta \hat{\sigma}\_\eta^4$, where $\beta$ is a scalar chosen to meet the pre-defined probability of false alarm.
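The discrete statistic (3.54)–(3.55) can be sketched as follows; the test signal is a bare sinusoid, whose quadratic lag products exhibit a cyclic feature at twice its frequency, and the probe cyclic frequencies and lag count are placeholders.

```python
import numpy as np

def cyclic_autocorr(y, alpha, max_lag, Ts=1.0):
    """Discrete cyclic autocorrelation (3.54), truncated to the samples
    actually available at each lag k."""
    N = len(y)
    e = np.exp(-2j * np.pi * alpha * np.arange(N) * Ts)
    return np.array([np.mean(y[k:] * np.conj(y[: N - k]) * e[: N - k])
                     for k in range(max_lag)])

def csd_stat(y, alpha, max_lag=8):
    """Test statistic C1 of (3.55): energy of the CAC over the first lags."""
    return float(np.sum(np.abs(cyclic_autocorr(y, alpha, max_lag)) ** 2))

n = np.arange(4000)
carrier = np.cos(2 * np.pi * 0.1 * n)   # cyclic feature at alpha = 2 * 0.1
print(csd_stat(carrier, 0.2) > 100 * csd_stat(carrier, 0.37))  # True
```

Probing at the correct cyclic frequency yields a statistic orders of magnitude larger than at an unrelated one, which is what makes the CSD selective against stationary noise and dissimilar interference.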

There are other test statistics and decision rules (thresholds) for CSD. In particular, if the signal is cyclostationary at multiple cyclic frequencies, how to combine them into a single test statistic is an interesting problem. In [48, 49], a general structure based on the GLRT principle is proposed to exploit multiple cyclic frequencies; however, the method has very high complexity and requires some a priori information on the channel. Some simplified approaches have also been studied [50]. The use of CSD for the ATSC signal is proposed in [51]. There is also research on OFDM signal detection using CSD [50, 52–54].

When interference exists, CSD may still work well as long as the interference does not share the same cyclostationary feature as the primary signal, and in general the chance of such a coincidence is slim. This means CSD is robust to interference and noise uncertainty. Furthermore, it is possible to distinguish signal types, because different signals may have different non-zero cyclic frequencies.

Although cyclostationary detection has certain advantages, it also has some disadvantages:


#### *3.2.8 Detection Threshold and Test Statistic Distribution*

To decide whether a signal is present, we need to set a threshold $\gamma$ for each proposed test statistic, such that a certain $P\_d$ and/or $P\_{fa}$ is achieved. For a fixed sample size $N$, we cannot set the threshold to meet arbitrarily high $P\_d$ and arbitrarily low $P\_{fa}$ at the same time, as the two targets conflict. Since we have little or no prior information on the signal (indeed, we do not even know whether a signal is present), it is difficult to set the threshold based on $P\_d$. Hence, a common practice is to choose the threshold based on $P\_{fa}$ under hypothesis $\mathcal{H}\_0$.

Without loss of generality, the test threshold can be decomposed as $\gamma = \gamma\_1 T\_0(\mathbf{x})$, where $\gamma\_1$ is related to the sample size $N$ and the target $P\_{fa}$, and $T\_0(\mathbf{x})$ is a statistic related to the noise distribution under $\mathcal{H}\_0$. For example, for energy detection with known noise power, we have

$$T\_0(\mathbf{x}) = \sigma\_\eta^2 \tag{3.57}$$

For the matched-filtering detection with known noise power, we have

$$T\_0(\mathbf{x}) = \sigma\_\eta \tag{3.58}$$

In practice, the parameter $\gamma\_1$ can be set either empirically, based on observations over a period of time when the signal is known to be absent, or analytically, based on the distribution of the test statistic under $\mathcal{H}\_0$. In general, such distributions are difficult to find; some known results are given as follows.

For the energy detection defined in (3.27), it can be shown that for sufficiently large $N$, the test statistic is well approximated by a Gaussian distribution [14, 28], i.e.,

$$\frac{1}{M}T\_{ED}(\mathbf{x}) \sim \mathcal{N}\left(\sigma\_{\eta}^{2}, \frac{2\sigma\_{\eta}^{4}}{NM}\right) \quad \text{under } \mathcal{H}\_{0} \tag{3.59}$$

Accordingly, for given $P\_{fa}$ and $N$, the corresponding $\gamma\_1$ can be found as

$$\gamma\_1 = M\left(\sqrt{\frac{2}{NM}}\mathcal{Q}^{-1}(P\_{fa}) + 1\right) \tag{3.60}$$

where

$$Q(t) = \frac{1}{\sqrt{2\pi}} \int\_{t}^{+\infty} e^{-u^2/2} \mathrm{d}u \tag{3.61}$$
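As a quick Monte-Carlo check of the Gaussian approximation for the averaged energy statistic under $\mathcal{H}\_0$, cf. (3.59): a threshold derived from the target $P\_{fa}$ should produce an empirical false-alarm rate close to that target. The sample size, trial count, and target below are placeholders.

```python
import numpy as np
from statistics import NormalDist

# Empirical check (M = 1, unit noise power): set the threshold on the
# averaged energy statistic from its Gaussian approximation under H0,
# then measure the realized false-alarm rate over many noise-only trials.
rng = np.random.default_rng(5)
n, trials, pfa = 500, 10_000, 0.1
gamma = 1.0 + np.sqrt(2.0 / n) * NormalDist().inv_cdf(1.0 - pfa)
energies = np.mean(rng.standard_normal((trials, n)) ** 2, axis=1)
print(round(float(np.mean(energies > gamma)), 3))   # close to the target 0.1
```

The small residual gap between the empirical rate and the target reflects the skew of the exact chi-square law, which the Gaussian approximation ignores.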

For the matched-filtering detection defined in (3.37), for a sufficiently large *N*, we have

$$\frac{1}{\sqrt{\sum\_{n=0}^{N-1}||\mathbf{s}(n)||^2}} T\_{MF}(\mathbf{x}) \sim \mathcal{N}\left(0, \sigma\_\eta^2\right) \quad \text{under } \mathcal{H}\_0 \tag{3.62}$$

Thereby, for given *Pf a* and *N*, it can be shown that

$$\gamma\_1 = \mathcal{Q}^{-1}(P\_{fa}) \sqrt{\sum\_{n=0}^{N-1} ||\mathbf{s}(n)||^2} \tag{3.63}$$

For the GLRT-based detection, it can be shown that the asymptotic (as *N* → ∞) log-likelihood ratio is central chi-square distributed [16]. More precisely,

$$2\ln T\_{GLRT}(\mathbf{x}) \sim \chi^2\_r \quad \text{under } \mathcal{H}\_0 \tag{3.64}$$

where $r$ is the number of independent scalar unknowns under $\mathcal{H}\_0$ and $\mathcal{H}\_1$. For instance, if $\sigma\_\eta^2$ is known while $\mathbf{R}\_s$ is not, $r$ will be equal to the number of independent real-valued scalar variables in $\mathbf{R}\_s$. However, there is no explicit expression for $\gamma\_1$ in this case.

#### **3.3 Eigenvalue Based Detections**

Eigenvalue-based detection (EBD) was first proposed in [47, 57–60] and later studied and refined in [19, 61–64]. EBD can be derived from different approaches, such as the GLRT principle or information theory; some examples of the derivations can be found in [19, 20, 64]. The threshold setting of the EBD relies on random matrix theory [47, 57–60]. The EBD methods solve the noise uncertainty problem by using the statistical covariance matrix to estimate the noise power, and they can detect a signal without explicit information about the signal. The method was also adopted by the IEEE 802.22 standard as a solution for detecting TV and wireless microphone signals.

#### *3.3.1 The Methods*

We consider the same model as defined at the beginning of this chapter. Let $N\_j \stackrel{\text{def}}{=} \max\_i(q\_{ij})$, zero-pad $h\_{ij}(k)$ if necessary, and define

$$\mathbf{h}\_{j}(n) \stackrel{\text{def}}{=} [h\_{1j}(n), h\_{2j}(n), \dots, h\_{Mj}(n)]^T \tag{3.65}$$

We have [47]

$$\mathbf{x}(n) = \mathbb{H}\mathbf{s}(n) + \eta(n) \tag{3.66}$$

where $\mathbb{H}$ is an $ML \times (\hat{N} + PL)$ matrix ($\hat{N} \stackrel{\text{def}}{=} \sum\_{j=1}^{P} N\_j$) defined as

$$\mathbb{H} \stackrel{\text{def}}{=} [\mathbb{H}\_1, \mathbb{H}\_2, \dots, \mathbb{H}\_P], \tag{3.67}$$

$$\mathbb{H}\_{j} \stackrel{\text{def}}{=} \begin{bmatrix} \mathbf{h}\_{j}(0) & \cdots & \cdots & \mathbf{h}\_{j}(N\_{j}) & 0 & \cdots & 0\\ 0 & \mathbf{h}\_{j}(0) & \cdots & \cdots & \mathbf{h}\_{j}(N\_{j}) & \cdots & 0\\ & & \ddots & & \ddots\\ & & 0 & \cdots & \mathbf{h}\_{j}(0) & \cdots & \cdots & \mathbf{h}\_{j}(N\_{j}) \end{bmatrix} \tag{3.68}$$

Note that the dimension of $\mathbb{H}\_j$ is $ML \times (N\_j + L)$.
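To make the block structure of (3.68) concrete, here is a small sketch that assembles $\mathbb{H}\_j$ from the channel taps. `build_Hj` is our own illustrative name, and plain nested lists stand in for a matrix type.

```python
def build_Hj(h, L):
    # h[k] is the M-vector h_j(k), k = 0..N_j; returns the
    # M*L x (N_j + L) block filtering matrix of eq. (3.68),
    # where block-row b places h_j(0), ..., h_j(N_j) starting at column b.
    M = len(h[0])
    Nj = len(h) - 1
    H = [[0.0] * (Nj + L) for _ in range(M * L)]
    for b in range(L):                 # block-row index
        for k, hk in enumerate(h):     # channel tap index
            for m in range(M):
                H[b * M + m][b + k] = hk[m]
    return H
```

For a single receiver ($M = 1$) with taps $h\_j(0) = 1$, $h\_j(1) = 0.5$ and $L = 2$, this produces the expected $2 \times 3$ shifted-tap matrix.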

Define the statistical covariance matrices of the signals and noise as

$$\mathbf{R}\_x = \mathbf{E}(\mathbf{x}(n)\mathbf{x}^\dagger(n))\tag{3.69}$$

$$\mathbf{R}\_{\mathbf{s}} = \mathbf{E}(\mathbf{s}(n)\mathbf{s}^{\dagger}(n))\tag{3.70}$$

$$\mathbf{R}\_{\eta} = \mathrm{E}(\eta(n)\eta^{\dagger}(n))\tag{3.71}$$

We can verify that

$$\mathbf{R}\_{\mathbf{x}} = \mathbb{H}\mathbf{R}\_{\mathbf{s}}\mathbb{H}^{\dagger} + \sigma\_{\eta}^{2}\mathbf{I}\_{ML} \tag{3.72}$$

where $\sigma\_\eta^2$ is the variance of the noise, and $\mathbf{I}\_{ML}$ is the identity matrix of order $ML$.

Let the eigenvalues of $\mathbf{R}\_x$ and $\mathbb{H}\mathbf{R}\_s\mathbb{H}^\dagger$ be $\lambda\_1 \geq \lambda\_2 \geq \cdots \geq \lambda\_{ML}$ and $\rho\_1 \geq \rho\_2 \geq \cdots \geq \rho\_{ML}$, respectively. Obviously, $\lambda\_n = \rho\_n + \sigma\_\eta^2$. When there is no signal, that is, $\mathbf{s}(n) = 0$ (then $\mathbf{R}\_s = 0$), we have $\lambda\_1 = \lambda\_2 = \cdots = \lambda\_{ML} = \sigma\_\eta^2$. Hence, $\lambda\_1/\lambda\_{ML} = 1$. When there is a signal, if $\rho\_1 > \rho\_{ML}$, we have $\lambda\_1/\lambda\_{ML} > 1$. Hence, we can detect whether a signal exists by checking the ratio $\lambda\_1/\lambda\_{ML}$. Obviously, $\rho\_1 = \rho\_{ML}$ if and only if $\mathbb{H}\mathbf{R}\_s\mathbb{H}^\dagger = \lambda\mathbf{I}\_{ML}$, where $\lambda$ is a positive number. From the definitions of the matrix $\mathbb{H}$ and $\mathbf{R}\_s$, it is highly probable that $\mathbb{H}\mathbf{R}\_s\mathbb{H}^\dagger \neq \lambda\mathbf{I}\_{ML}$. In fact, the worst case is $\mathbf{R}\_s = \sigma\_s^2\mathbf{I}$, that is, the source signal samples are i.i.d. In this case, $\mathbb{H}\mathbf{R}\_s\mathbb{H}^\dagger = \sigma\_s^2\mathbb{H}\mathbb{H}^\dagger$. Obviously, $\sigma\_s^2\mathbb{H}\mathbb{H}^\dagger = \lambda\mathbf{I}\_{ML}$ if and only if all the rows of $\mathbb{H}$ have the same power and are mutually orthogonal. This is possible only when $N\_j = 0$, $j = 1, \dots, P$, and $M = 1$, that is, the source signal samples are i.i.d., all the channels are flat-fading, and there is only one receiver.

Thus, if $M > 1$ (multiple antennas), or the channel has multiple paths, or the source signal itself is correlated, the eigenvalues of $\mathbf{R}\_x$ are not identical, whereas in the pure-noise case $\mathbf{R}\_x$ has identical eigenvalues. Hence, we can check the eigenvalues of $\mathbf{R}\_x$ to see whether a signal is present.

In practice, we only have a finite number of samples. Hence, we can obtain only the sample covariance matrix rather than the statistical covariance matrix. The sample covariance matrix is defined as

$$\mathbf{R}\_{\mathbf{x}}(N) \stackrel{\text{def}}{=} \frac{1}{N} \sum\_{n=L-1}^{L-2+N} \mathbf{x}(n)\mathbf{x}^{\dagger}(n) \tag{3.73}$$

where $N$ is the number of collected samples. Based on the sample covariance matrix and its eigenvalues, a few methods have been proposed from different perspectives [19, 47, 57–64]. Such methods are called eigenvalue based detections (EBD). We summarize the methods as follows.

Let $\lambda\_1 \geq \lambda\_2 \geq \cdots \geq \lambda\_{ML}$ be the eigenvalues of the sample covariance matrix.

#### **Algorithm** Eigenvalue based detections

Step 1. Compute the sample covariance matrix as defined in (3.73).

Step 2. Calculate the eigenvalues of the sample covariance matrix.

Step 3. Compute a test statistic from the eigenvalues. There are different approaches to constructing the test statistic. A few simple but effective methods are as follows:

#### 1. **Maximum eigenvalue to trace detection (MET)**. The test statistic is

$$T\_{MET} = \lambda\_1 / \operatorname{tr}(\mathbf{R}\_x(N)) \tag{3.74}$$

where $\operatorname{tr}(\cdot)$ is the trace of a matrix, $\operatorname{tr}(\mathbf{R}\_x(N)) = \sum\_{i=1}^{ML} \lambda\_i$. This method is also called blindly combined energy detection (BCED) in [60].

2. **Maximum to minimum eigenvalue detection (MME)** [47]. The test statistic is

$$T\_{MME} = \lambda\_1 / \lambda\_{ML} \tag{3.75}$$

#### 3. **Arithmetic to geometric mean (AGM)** [19]. The test statistic is

$$T\_{AGM} = \frac{1}{ML} \sum\_{i=1}^{ML} \lambda\_i / \left(\prod\_{i=1}^{ML} \lambda\_i\right)^{1/ML} \tag{3.76}$$

Step 4. Compare the test statistic with a threshold to make a decision.
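Steps 2–3 reduce to elementary operations on the eigenvalue list once the eigen-decomposition is available (e.g. from any linear algebra library). The following sketch, with our own function names, computes the three statistics (3.74)–(3.76):

```python
import math

def t_met(eigs):
    # MET, eq. (3.74): largest eigenvalue over the trace
    return max(eigs) / sum(eigs)

def t_mme(eigs):
    # MME, eq. (3.75): largest over smallest eigenvalue
    return max(eigs) / min(eigs)

def t_agm(eigs):
    # AGM, eq. (3.76): arithmetic over geometric mean
    n = len(eigs)
    am = sum(eigs) / n
    gm = math.exp(sum(math.log(v) for v in eigs) / n)
    return am / gm
```

For identical eigenvalues (the pure-noise case), MME and AGM equal 1 exactly; any spread in the eigenvalues pushes both ratios above 1.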

None of these methods uses information about the signal, the channel, or the noise power. The methods are robust to synchronization errors, channel impairments, and noise uncertainty.

#### *3.3.2 Threshold Setting*

Finding a formula for the threshold is mathematically involved: in general, we need the theoretical distribution of some combination of the eigenvalues of a random matrix. There has been some exciting work on this using random matrix theory [47, 61–63, 65–68]. For simplicity, in the following we provide an example for maximum eigenvalue detection (MED) with known noise power [59]. In this case, we compare the ratio of the maximum eigenvalue of the sample covariance matrix $\mathbf{R}\_x(N)$ to the noise power $\sigma\_\eta^2$ with a threshold $\gamma\_1$. To set the value of $\gamma\_1$, we need to know the distribution of $\lambda\_1(N)/\sigma\_\eta^2$ for any finite $N$. Fortunately, random matrix theory has laid the foundation for deriving these distributions.

When there is no signal, $\mathbf{R}\_x(N)$ reduces to $\mathbf{R}\_\eta(N)$, the sample covariance matrix of the noise only. It is known that $\mathbf{R}\_\eta(N)$ is a Wishart random matrix [69]. The study of eigenvalue distributions of random matrices has been a very active research topic in recent years in mathematics, communications engineering, and physics [69–72]. The joint PDF of the ordered eigenvalues of a Wishart random matrix has been known for many years [69]. However, since the expression of the joint PDF is very complicated, no simple closed-form expressions have been found for the marginal PDFs of the ordered eigenvalues, although some computable expressions have been found in [73]. Recently, I. M. Johnstone and K. Johansson found the distribution of the largest eigenvalue of a Wishart random matrix [70, 71], as described in the following theorem.

**Theorem 3.1** *Let* $\mathbf{A}(N) = \frac{N}{\sigma\_\eta^2}\mathbf{R}\_\eta(N)$, $\mu = (\sqrt{N-1} + \sqrt{M})^2$, *and* $\nu = (\sqrt{N-1} + \sqrt{M})\left(\frac{1}{\sqrt{N-1}} + \frac{1}{\sqrt{M}}\right)^{1/3}$. *Assume that* $\lim\_{N\to\infty} \frac{M}{N} = y$ $(0 < y < 1)$. *Then,* $\frac{\lambda\_{\max}(\mathbf{A}(N)) - \mu}{\nu}$ *converges (with probability one) to the Tracy–Widom distribution of order 1 [74, 75].*


**Table 3.1** Numerical table for the Tracy–Widom distribution of order 1

The Tracy–Widom distribution provides the limiting law for the largest eigenvalue of certain random matrices [74, 75]. Let *F*<sup>1</sup> be the cumulative distribution function (CDF) of the Tracy–Widom distribution of order 1. We have

$$F\_1(t) = \exp\left(-\frac{1}{2} \int\_t^\infty \left(q(u) + (u-t)q^2(u)\right) du\right) \tag{3.77}$$

where *q*(*u*) is the solution of the nonlinear Painlevé II differential equation given by

$$q''(u) = uq(u) + 2q^3(u)\tag{3.78}$$

Accordingly, numerical solutions can be found for function *F*1(*t*) at different values of *t*. Also, there have been tables for values of *F*1(*t*) [70] as shown in Table 3.1.

Using the above results, we can derive the probability of false alarm as

$$\begin{split} P\_{fa} &= P\left(\lambda\_1(N) > \gamma\_1 \sigma\_\eta^2\right) \\ &= P\left(\frac{\lambda\_{\max}(\mathbf{A}(N)) - \mu}{\nu} > \frac{\gamma\_1 N - \mu}{\nu}\right) \approx 1 - F\_1\left(\frac{\gamma\_1 N - \mu}{\nu}\right) \end{split} (3.79)$$

Thus we have

$$F\_1 \left(\frac{\gamma\_1 N - \mu}{\nu}\right) \approx 1 - P\_{fa} \tag{3.80}$$

or equivalently,

$$\frac{\gamma\_1 N - \mu}{\nu} \approx F\_1^{-1} (1 - P\_{fa}) \tag{3.81}$$

From the definitions of μ and ν in Theorem 3.1, we finally obtain the value for γ<sup>1</sup> as

$$\gamma\_1 \approx \frac{(\sqrt{N} + \sqrt{M})^2}{N} \left( 1 + \frac{(\sqrt{N} + \sqrt{M})^{-2/3}}{(NM)^{1/6}} F\_1^{-1} (1 - P\_{fa}) \right) \tag{3.82}$$

Note that $\gamma\_1$ depends only on $N$, $M$, and $P\_{fa}$. A similar approach can be used for the case of MME detection, as shown in [47, 68].
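A sketch of (3.82), using a small lookup of Tracy–Widom percentiles in place of the full Table 3.1. The name `med_threshold` is ours, and the percentile constants are illustrative figures rounded from commonly published Tracy–Widom tables, not an authoritative reproduction of the table.

```python
import math

# A few upper percentiles of the Tracy-Widom distribution of order 1,
# as commonly tabulated (cf. Table 3.1); rounded illustrative constants.
TW1_INV = {0.90: 0.45, 0.95: 0.98, 0.99: 2.02}

def med_threshold(N, M, p_fa):
    # eq. (3.82): threshold gamma_1 for MED with known noise power
    f1_inv = TW1_INV[round(1.0 - p_fa, 2)]
    lead = (math.sqrt(N) + math.sqrt(M)) ** 2 / N
    corr = (math.sqrt(N) + math.sqrt(M)) ** (-2.0 / 3.0) / (N * M) ** (1.0 / 6.0)
    return lead * (1.0 + corr * f1_inv)
```

For $N = 5000$ and $M = 8$ (the setting of Fig. 3.1), the threshold is only slightly above 1, and a smaller $P\_{fa}$ yields a larger threshold, as expected.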

Figure 3.1 shows the expected (theoretical) and actual (simulated) probability of false alarm values based on the theoretical threshold in (3.82) for $N = 5000$, $M = 8$, and $L = 1$. The differences between the two sets of values are reasonably small, suggesting that the theoretical threshold is quite accurate.

**Fig. 3.1** Comparison of theoretical and actual $P\_{fa}$

#### *3.3.3 Performances of the Methods*

To show the performance and robustness of the methods, we give some simulation results for the EBDs, with energy detection (ED) included for comparison. We consider two cases: the signal is time-uncorrelated, and the signal is time-correlated. The Receiver Operating Characteristic (ROC) curves ($P\_d$ versus $P\_{fa}$) at SNR $= -15$ dB, $N = 5000$, and $M = 4$ are plotted for the two cases. The performance in the first case is shown in Fig. 3.2 with $L = 1$, and that in the second case is shown in Fig. 3.3 with $L = 6$, where "ED-$u$dB" means energy detection with $u$ dB noise uncertainty. In Fig. 3.3, the source signal is a wireless microphone signal [76] and a multipath fading channel (with eight independent taps of equal power) is assumed. In both cases, MET, MME, and AGM perform better than ED. MET, MME, and AGM are totally immune to noise uncertainty, whereas ED is very vulnerable to noise power uncertainty [4–6].

Obviously, the eigenvalue based detections do not use information about the signal, the channel, or the noise power. The methods are robust to synchronization errors, channel impairments, and noise uncertainty. However, like other blind detections, they are vulnerable to unknown narrowband interference.

**Fig. 3.2** ROC curve: i.i.d source signal

**Fig. 3.3** ROC curve: wireless microphone source signal

#### **3.4 Covariance Based Detections**

Covariance based detection (CBD) was first proposed in [65, 77]. The method solves the noise uncertainty problem by using the online estimated noise power, and it can detect a signal without explicit knowledge of the signal. The method was also adopted by the IEEE 802.22 standard for detecting TV signals and as the first choice for sensing wireless microphone signals.

#### *3.4.1 The Methods*

As shown in the last section, the covariance matrix of the received signal can be written as

$$\mathbf{R}\_{\mathbf{x}} = \mathbb{H}\mathbf{R}\_{\mathbf{s}}\mathbb{H}^{\dagger} + \sigma\_{\eta}^{2}\mathbf{I}\_{ML} \tag{3.83}$$

If the signal $s(n)$ is not present, $\mathbf{R}\_s = 0$. Hence the off-diagonal elements of $\mathbf{R}\_x$ are all zeros. If there is a signal and the signal samples are correlated, $\mathbf{R}\_s$ is not a diagonal matrix. Hence, some of the off-diagonal elements of $\mathbf{R}\_x$ should be nonzero.

In practice, the statistical covariance matrix can only be calculated using a limited number of signal samples. For notational simplicity, here we consider the single antenna/sensor case ($M = 1$) and drop the indices for antenna/sensor. Define the sample auto-correlations of the received signal as

$$r(l) = \frac{1}{N\_s} \sum\_{m=0}^{N\_s - 1} x(m)x(m - l), \ l = 0, 1, \dots, L - 1 \tag{3.84}$$

where $x(m)$ is the received signal sample and $N\_s$ is the number of available samples. The statistical covariance matrix $\mathbf{R}\_x$ can be approximated by the sample covariance matrix $\mathbf{R}\_x(N\_s)$ as defined in the last section. For $M = 1$, $\mathbf{R}\_x(N\_s)$ can be formed from the auto-correlations $r(l)$. Note that the sample covariance matrix is symmetric and Toeplitz.

Based on the generalized likelihood ratio test (GLRT) or information/signal processing theory, there have been a few methods proposed based on the sample covariance matrix. One class of such methods is called covariance based detections (CBD) [1, 65, 76, 77]. Some methods that directly use the auto-correlations of the signal can also be included in this class [78]. The covariance based detections directly use the elements of the covariance matrix to construct detection methods, which can reduce computational complexity. The methods are summarized in the following.

Let the entries of the matrix $\mathbf{R}\_x(N\_s)$ be $c\_{mn}$ $(m, n = 1, 2, \dots, ML)$.

#### **Algorithm** Covariance based detections

Step 1. Compute the sample covariance matrix **R***<sup>x</sup>* (*Ns*) as defined in (3.73).

Step 2. Construct a test statistic directly from the entries of the sample covariance matrix. In general, the test statistic of the CBD is

$$T\_{CBD} = \mathcal{F}\_1(c\_{mn}) / \mathcal{F}\_2(c\_{mm}) \tag{3.85}$$

where $\mathcal{F}\_1$ and $\mathcal{F}\_2$ are two functions. In the single antenna/sensor case, it can be written equivalently as

$$T\_{CBD} = \mathcal{F}\_1(r(0), \dots, r(L-1)) / \mathcal{F}\_2(r(0), \dots, r(L-1)) \tag{3.86}$$

There are many ways to choose the two functions. Some special cases are shown in the following.

1. **Covariance absolute value detection (CAVD)**. The test statistic is

$$T\_{CAVD} = \sum\_{m=1}^{ML} \sum\_{n=1}^{ML} |c\_{mn}| / \sum\_{m=1}^{ML} |c\_{mm}| \tag{3.87}$$

2. **Maximum auto-correlation detection (MACD)**. The test statistic is

$$T\_{MACD} = \max\_{m \neq n} |c\_{mn}| \Big/ \sum\_{m=1}^{ML} |c\_{mm}| \tag{3.88}$$

#### 3. **Fixed auto-correlation detection (FACD)**: The test statistic is

$$T\_{FACD} = |c\_{m\_0 n\_0}| \Big/ \sum\_{m=1}^{ML} |c\_{mm}| \tag{3.89}$$

where $m\_0$ and $n\_0$ are fixed numbers between 1 and $ML$. In the single antenna case, the detection can be written equivalently as

$$T\_{FACD} = |r(l\_0)|/r(0)\tag{3.90}$$

This detection is especially useful when we have some prior information on the source signal correlation and know the lag that produces the maximum auto-correlation. For example, it can be used to detect OFDM signals by using the CP or pilot property [52].

Step 3. Compare the test statistic with a threshold to make a decision.

None of these methods uses information about the signal, the channel, or the noise power. The methods are robust to synchronization errors, channel impairments, and noise uncertainty.

The test statistic is compared with a threshold $\gamma$ to make a decision. The threshold $\gamma$ is determined based on the given $P\_{fa}$. Finding a formula for the thresholds is mathematically involved [65, 77]. We will show an example for $M = 1$ in the following subsection.

The computational complexity of the algorithm is as follows (for $M = 1$). Computing the auto-correlations of the received signal requires about $LN\_s$ multiplications and additions. Computing $T\_1(N\_s)$ and $T\_2(N\_s)$ requires about $L^2$ additions. Therefore, the total number of multiplications and additions is about $LN\_s + L^2$.
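For the single antenna case ($M = 1$), the whole pipeline above fits in a few lines. This is a minimal sketch with our own names (`sample_autocorr`, `t_cavd`); it mirrors the $LN\_s + L^2$ complexity count, building the Toeplitz covariance entries $c\_{nm} = r(|n-m|)$ directly from the auto-correlations.

```python
def sample_autocorr(x, L):
    # eq. (3.84): sample auto-correlations r(0), ..., r(L-1)
    Ns = len(x)
    return [sum(x[m] * x[m - l] for m in range(l, Ns)) / Ns
            for l in range(L)]

def t_cavd(x, L):
    # CAVD statistic T1/T2 for M = 1, using the symmetric Toeplitz
    # sample covariance with entries c_nm = r(|n - m|)
    r = sample_autocorr(x, L)
    t1 = sum(abs(r[abs(n - m)]) for n in range(L) for m in range(L)) / L
    t2 = abs(r[0])              # average of the L identical diagonal entries
    return t1 / t2
```

For white noise the ratio stays near 1; a strongly correlated input drives it well above 1, as the analysis in the next subsection predicts.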

#### *3.4.2 Detection Probability and Threshold Determination*

It is generally difficult to find closed-form detection probabilities. For this purpose, we need to find the distribution of the test statistics. In [65, 76, 77], approximations for the distribution of the test statistics have been found by using the central limit theorem for $M = 1$. Furthermore, theoretical estimates for the two probabilities, $P\_d$ and $P\_{fa}$, as well as the thresholds associated with these probabilities, were also discussed. We summarize the results as follows.

In the following, we consider the case of $M = 1$. Denote by $c\_{nm}$ the element of the sample covariance matrix $\mathbf{R}\_x(N\_s)$ at the $n$th row and $m$th column, and let

$$T\_1(N\_s) = \frac{1}{L} \sum\_{n=1}^{L} \sum\_{m=1}^{L} |c\_{nm}| \tag{3.91}$$

$$T\_2(N\_s) = \frac{1}{L} \sum\_{n=1}^{L} |c\_{nn}|\tag{3.92}$$

The test statistic of the CAVD is then $T\_{CAVD} = T\_1(N\_s)/T\_2(N\_s)$.

It is shown in [65, 76, 77] that

$$\lim\_{N\_s \to \infty} \mathrm{E}(T\_1(N\_s)) = \sigma\_s^2 + \sigma\_\eta^2 + \frac{2\sigma\_s^2}{L} \sum\_{l=1}^{L-1} (L-l)|\alpha\_l| \tag{3.93}$$

where

$$\alpha\_l = \operatorname{E}[s(n)s(n-l)]/\sigma\_s^2 \tag{3.94}$$

$\sigma\_s^2$ is the signal power, $\sigma\_s^2 = \mathrm{E}[s^2(n)]$, and $|\alpha\_l|$ defines the correlation strength among the signal samples, where $0 \leq |\alpha\_l| \leq 1$. For simplicity, we denote

$$\Upsilon\_L \triangleq \frac{2}{L} \sum\_{l=1}^{L-1} (L-l)|\alpha\_l| \tag{3.95}$$

which is the overall correlation strength among the consecutive *L* samples. When there is no signal, we have

$$T\_1(N\_s)/T\_2(N\_s) \approx \mathrm{E}(T\_1(N\_s))/\mathrm{E}(T\_2(N\_s)) = 1 + (L-1)\sqrt{\frac{2}{\pi N\_s}}\tag{3.96}$$

Note that this ratio approaches 1 as $N\_s$ approaches infinity. Also note that the ratio does not depend on the noise power (variance). On the other hand, when there is a signal (the signal-plus-noise case), we have

$$\begin{split} T\_1(N\_s) / T\_2(N\_s) &\approx \mathcal{E}(T\_1(N\_s)) / \mathcal{E}(T\_2(N\_s)) \\ &\approx 1 + \frac{\sigma\_s^2}{\sigma\_s^2 + \sigma\_\eta^2} \Upsilon\_L = 1 + \frac{\text{SNR}}{\text{SNR} + 1} \Upsilon\_L \end{split} \tag{3.97}$$

Here the ratio approaches a number larger than 1 as $N\_s$ approaches infinity. That number is determined by the correlation strength among the signal samples and the SNR. Hence, for any fixed SNR, given a sufficiently large number of samples, we can always determine whether a signal is present based on the ratio.

However, in practice we have only a limited number of samples, so we need to evaluate the performance for fixed $N\_s$.

First, we analyze $P\_{fa}$ under hypothesis $\mathcal{H}\_0$. For a given threshold $\gamma\_1$, the probability of false alarm for the CAVD algorithm is

$$P\_{fa} = P\left(T\_1(N\_s) > \chi\_1 T\_2(N\_s)\right) \approx P\left(T\_2(N\_s) < \frac{1}{\chi\_1} \left(1 + (L-1)\sqrt{\frac{2}{N\_s \pi}}\right) \sigma\_\eta^2\right)$$

$$= P\left(\frac{T\_2(N\_s) - \sigma\_\eta^2}{\sqrt{\frac{2}{N\_s}} \sigma\_\eta^2} < \frac{\frac{1}{\eta} \left(1 + (L-1)\sqrt{\frac{2}{N\_s \pi}}\right) - 1}{\sqrt{2/N\_s}}\right)$$

$$\approx 1 - Q\left(\frac{\frac{1}{\eta} \left(1 + (L-1)\sqrt{\frac{2}{N\_s \pi}}\right) - 1}{\sqrt{2/N\_s}}\right) \tag{3.98}$$

where

$$\mathbf{Q}(t) = \frac{1}{\sqrt{2\pi}} \int\_{t}^{+\infty} e^{-\mu^2/2} \mathrm{d}u \tag{3.99}$$

For a given *Pf a*, the associated threshold should be chosen such that

$$\frac{\frac{1}{\gamma\_1}\left(1+(L-1)\sqrt{\frac{2}{N\_s\pi}}\right)-1}{\sqrt{2/N\_s}}=-\mathbf{Q}^{-1}(P\_{fa})\tag{3.100}$$

That is,

$$\gamma\_1 = \frac{1 + (L - 1)\sqrt{\frac{2}{N\_s \pi}}}{1 - \mathbf{Q}^{-1}(P\_{fa})\sqrt{\frac{2}{N\_s}}} \tag{3.101}$$
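A sketch of the threshold rule (3.101); `q_inv` and `cavd_threshold` are our own names, with the inverse Q-function computed by bisection. Note that neither the noise power nor the SNR enters the computation.

```python
import math

def q_inv(p, lo=-10.0, hi=10.0):
    # bisection inverse of Q(t) = 0.5*erfc(t/sqrt(2))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cavd_threshold(L, Ns, p_fa):
    # eq. (3.101): threshold depends only on L, Ns and P_fa
    num = 1.0 + (L - 1) * math.sqrt(2.0 / (Ns * math.pi))
    den = 1.0 - q_inv(p_fa) * math.sqrt(2.0 / Ns)
    return num / den
```

Consistent with the discussion below (3.105), the threshold tends to 1 as $N\_s \to \infty$.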

Note that the threshold does not depend on the noise power or the SNR. After the threshold is set, we can calculate the probability of detection at various SNRs. For the given threshold $\gamma\_1$, when the signal is present,

$$P\_d = P\left(T\_1(N\_s) > \gamma\_1 T\_2(N\_s)\right) = P\left(T\_2(N\_s) < \frac{1}{\gamma\_1} T\_1(N\_s)\right)$$

$$\approx P\left(T\_2(N\_s) < \frac{1}{\gamma\_1} \mathbb{E}(T\_1(N\_s))\right)$$

$$= P\left(\frac{T\_2(N\_s) - \sigma\_s^2 - \sigma\_\eta^2}{\sqrt{\text{Var}(T\_2(N\_s))}} < \frac{\frac{1}{\gamma\_1} \mathbb{E}(T\_1(N\_s)) - \sigma\_s^2 - \sigma\_\eta^2}{\sqrt{\text{Var}(T\_2(N\_s))}}\right)$$

$$= 1 - Q\left(\frac{\frac{1}{\gamma\_1} \mathbb{E}(T\_1(N\_s)) - \sigma\_s^2 - \sigma\_\eta^2}{\sqrt{\text{Var}(T\_2(N\_s))}}\right) \tag{3.102}$$

For very large *Ns* and low SNR, we have

$$\text{Var}(T\_2(N\_s)) \approx \frac{2\sigma\_\eta^2}{N\_s} \left(2\sigma\_s^2 + \sigma\_\eta^2\right) \approx \frac{2(\sigma\_s^2 + \sigma\_\eta^2)^2}{N\_s} \tag{3.103}$$

and

$$\mathbb{E}(T\_1(N\_s)) \approx \sigma\_s^2 + \sigma\_\eta^2 + \sigma\_s^2 \Upsilon\_L \tag{3.104}$$

Hence, we have a further approximation

$$P\_d \approx 1 - \mathbf{Q} \left( \frac{\frac{1}{\gamma\_1} + \frac{\Upsilon\_L \sigma\_s^2}{\gamma\_1 (\sigma\_s^2 + \sigma\_\eta^2)} - 1}{\sqrt{2/N\_s}} \right) = 1 - \mathbf{Q} \left( \frac{\frac{1}{\gamma\_1} + \frac{\Upsilon\_L \text{SNR}}{\gamma\_1 (\text{SNR} + 1)} - 1}{\sqrt{2/N\_s}} \right) \tag{3.105}$$

Obviously, $P\_d$ increases with the number of samples $N\_s$, the SNR, and the correlation strength among the signal samples. Note that $\gamma\_1$ also depends on $N\_s$ as shown above, and $\lim\_{N\_s\to\infty} \gamma\_1 = 1$. Hence, for fixed SNR, $P\_d$ approaches 1 as $N\_s$ approaches infinity.

#### *3.4.3 Performance Analysis and Comparison*

To compare the performance of different methods, we first need a criterion. By properly choosing the thresholds, many methods can achieve any given $P\_d$ and $P\_{fa} > 0$ if a sufficiently large number of samples is available. The key question is how many samples are needed to achieve the given $P\_d$ and $P\_{fa} > 0$; we therefore choose this as the criterion for comparing algorithms.

For a target pair of $P\_d$ and $P\_{fa}$, based on (3.105) and (3.101), we can find the required number of samples for the CAVD as

$$N\_c \approx \frac{2\left(\mathbf{Q}^{-1}(P\_{fa}) - \mathbf{Q}^{-1}(P\_d) + (L-1)/\sqrt{\pi}\right)^2}{(\Upsilon\_L \text{SNR})^2} \tag{3.106}$$

For fixed $P\_d$ and $P\_{fa}$, $N\_c$ depends only on the smoothing factor $L$ and the overall correlation strength $\Upsilon\_L$. Hence, the best smoothing factor is

$$L\_{best} = \arg\min\_{L} \{ N\_c \} \tag{3.107}$$

which is related to the correlation strength among the signal samples.

Here we compare the CBD with energy detection. Energy detection simply compares the average power of the received signal with the noise power to make a decision. To guarantee reliable detection, the threshold must be set according to the noise power and the number of samples [4–6]. In contrast, the proposed methods do not rely on the noise power to set the threshold (see Eq. (3.101)), while keeping the other advantages of energy detection. Simulations have shown that the proposed method is much better than energy detection when noise uncertainty is present [65, 76, 77]. Hence, here we only compare the proposed method with ideal energy detection (which assumes the noise power is known exactly).

For energy detection, the required number of samples is approximately [5]

$$N\_e = \frac{2\left(\mathbf{Q}^{-1}(P\_{fa}) - \mathbf{Q}^{-1}(P\_d)\right)^2}{\text{SNR}^2} \tag{3.108}$$

Comparing (3.106) and (3.108), if we want *Nc* < *Ne*, we need

$$\Upsilon\_L > 1 + \frac{L - 1}{\sqrt{\pi} \left( \mathbf{Q}^{-1} (P\_{fa}) - \mathbf{Q}^{-1} (P\_d) \right)} \tag{3.109}$$

For example, if $P\_d = 0.9$ and $P\_{fa} = 0.1$, we need $\Upsilon\_L > 1 + \frac{L-1}{4.54}$. In conclusion, if the signal samples are highly correlated such that (3.109) holds, the CAVD is better than ideal energy detection; otherwise, ideal energy detection is better.
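The comparison can be checked numerically from (3.106) and (3.108); the function names are ours, and SNR is in linear scale.

```python
import math

def q_inv(p, lo=-10.0, hi=10.0):
    # bisection inverse of Q(t) = 0.5*erfc(t/sqrt(2))
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 0.5 * math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def n_cavd(L, ups_L, snr, p_fa=0.1, p_d=0.9):
    # eq. (3.106): samples needed by CAVD for target (P_d, P_fa)
    d = q_inv(p_fa) - q_inv(p_d) + (L - 1) / math.sqrt(math.pi)
    return 2.0 * d * d / (ups_L * snr) ** 2

def n_ed(snr, p_fa=0.1, p_d=0.9):
    # eq. (3.108): samples needed by ideal energy detection
    d = q_inv(p_fa) - q_inv(p_d)
    return 2.0 * d * d / snr ** 2
```

At SNR $= -15$ dB with $L = 8$, a correlation strength $\Upsilon\_L = 3$ satisfies (3.109) and CAVD needs fewer samples than ideal ED, whereas $\Upsilon\_L = 1$ does not.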

In terms of computational complexity, energy detection needs about $N\_s$ multiplications and additions. Hence, the computational complexity of the proposed methods is about $L$ times that of energy detection.

#### **3.5 Cooperative Spectrum Sensing**

When there are multiple secondary users/receivers distributed at different locations, it is possible for them to cooperate to achieve higher sensing reliability. There are various sensing cooperation schemes in the current literature [28, 29, 41, 79–92]. In general, these schemes can be classified into two categories: (A) Data fusion: each user sends its raw data or processed data to a specific user, which processes the data collected and then makes the final decision; and (B) Decision fusion: multiple users process their data independently and send their decisions to a specific user, which then makes the final decision.

#### *3.5.1 Data Fusion*

Theoretically, the LRT based on multiple sensors is the best. However, there are two major difficulties in using the optimal LRT based method: (1) it needs the exact distribution of $\mathbf{x}$, which is related to the source signal distribution, the wireless channels, and the noise distribution; and (2) it may need the raw data from all sensors, which is very expensive in practical applications.

In some situations, the signal samples are independent in time, that is, $\mathrm{E}(s\_i(n)s\_i(m)) = 0$ for $n \neq m$. If we further assume that the noise and signal samples are Gaussian distributed, i.e., $\eta(n) \sim \mathcal{N}(\mathbf{0}, \mathbf{R}\_\eta)$ and $\mathbf{s}(n) \sim \mathcal{N}(\mathbf{0}, \mathbf{R}\_s)$, where

$$\mathbf{R}\_s = \mathbf{E}(\mathbf{s}(n)\mathbf{s}^T(n)), \ \mathbf{R}\_\eta = \mathbf{E}(\eta(n)\eta^T(n))\tag{3.110}$$

the LRT can be obtained explicitly as [89]

$$\log T\_{LRT} = \frac{1}{N} \sum\_{n=0}^{N-1} \mathbf{x}^T(n) \mathbf{R}\_{\eta}^{-1} \mathbf{R}\_s (\mathbf{R}\_s + \mathbf{R}\_{\eta})^{-1} \mathbf{x}(n) \tag{3.111}$$

Note that in general the cross-correlations among the signals from different sensors are used in the detection here. It means that the fusion center needs the raw data from all sensors, if the signals from different sensors are correlated in space. The reporting of the raw data is very expensive for practical applications.

If the sensors are distributed at different locations and far apart, the primary signal will very likely arrive at different sensors at different times. *That is, in* (3.3), $\tau\_{ik}$ *may be different for different* $i$. For example, when sensing a channel of 6 MHz bandwidth with a 6 MHz sampling rate, a delay of one sample corresponds to approximately 50 m of propagation distance. In a large network such as an 802.22 cell (typically with a radius of 30 km), the differences in the distances from different sensors to the primary user could be as large as several kilometers. Therefore, the relative time delays $\tau\_{ik}$ can be as large as 20 samples or more. If the delays are different, the signals at the sensors will be independent in space.


For distributed sensors, the noises are independent in space. If we aim to sense at very low SNR, the received signal at each sensor is dominated by noise. Hence, even if the primary signals at different sensors are weakly correlated, the overall signals (primary signals plus noise) can be treated as approximately independent in space at low SNR. So, in the following, we further assume that $\mathrm{E}(s\_i(n)s\_j(n)) = 0$ for $i \neq j$.

Under the assumptions we have

$$\mathbf{R}\_{\eta} = \text{diag}(\sigma\_{\eta,1}^2, \dots, \sigma\_{\eta,M}^2) \tag{3.112}$$

$$\mathbf{R}\_s = \text{diag}(\sigma\_{s,1}^2, \dots, \sigma\_{s,M}^2) \tag{3.113}$$

where $\sigma\_{\eta,i}^2 = \mathrm{E}(|\eta\_i(n)|^2)$ and $\sigma\_{s,i}^2 = \mathrm{E}(|s\_i(n)|^2)$. Under these assumptions, we can express the LRT equivalently as

$$\log T\_{LRT} = \frac{1}{N} \sum\_{n=0}^{N-1} \sum\_{i=1}^{M} \frac{\sigma\_{s,i}^2}{\sigma\_{\eta,i}^2 (\sigma\_{s,i}^2 + \sigma\_{\eta,i}^2)} \left| x\_i(n) \right|^2 = \sum\_{i=1}^{M} \frac{\gamma\_i}{1 + \gamma\_i} T\_{ED,i} \tag{3.114}$$

where

$$T\_{ED,i} = \frac{1}{N\sigma\_{\eta,i}^2} \sum\_{n=0}^{N-1} |\mathbf{x}\_i(n)|^2 \tag{3.115}$$

and $\gamma\_i = \sigma\_{s,i}^2/\sigma\_{\eta,i}^2$ is the SNR at sensor $i$.

Note that $T\_{ED,i}$ is the normalized energy at sensor $i$. The LRT is simply a linearly combined (LC) cooperative sensing scheme. This method is also called cooperative energy detection (CED), which combines the energies from different sensors to make a decision. Thus there are three assertions for cooperative sensing by distributed sensors with time-independent signals:


If the signals are time dependent, the derivation of the LRT becomes much more difficult. Furthermore, the information of correlation among the signal samples is required. There have been methods to exploit the time and space correlations of the signals in a multi-antenna system [14]. If the raw data from all sensors are sent to the fusion center, the sensor network may be treated as a single multi-antenna system (virtual multi-antenna system). If the fusion center does not have the raw data, how to fully use the time and space correlations is still an open question, though there have been some sub-optimal methods. For example, a fusion scheme based on the CAVD is given in [87], which has the capability to mitigate interference and noise uncertainty.

A major difficulty in implementing the method is that the fusion center needs to know the SNR at each user. Also, the decision and threshold depend on the SNRs, which means that the detection process changes dynamically with the signal strength and noise power.

If $P = 1$, the propagation channels are flat-fading ($q\_{ik} = 0, \forall i, k$), and $\tau\_{ik} = 0, \forall i, k$, the signals at different antennas can be coherently combined first, after which energy detection is applied [28, 31, 93]. The method is called maximum ratio combining (MRC) cooperative energy detection:

$$T\_{MRC} = \frac{1}{N} \sum\_{n=0}^{N-1} |\sum\_{i=1}^{M} h\_i x\_i(n)|^2 \tag{3.116}$$

It is optimal if the noise powers at different sensors are equal. Note that the MRC needs the *raw data* from all sensors and also the channel information.

We have shown that the LRT is actually an LC scheme. It is natural to also consider other LC schemes. In general, an LC scheme simply sums the weighted energy values to obtain the following test statistic

$$T\_{LC} = \sum\_{i=1}^{M} g\_i T\_{ED,i} \tag{3.117}$$

where $g\_i \geq 0$ is the combining coefficient. If we allow the combining coefficients to depend on the SNRs of the sensors, we know from (3.114) that the *optimal sensing* should choose $g\_i = \gamma\_i/(1+\gamma\_i)$. So the problem is how to design an LC scheme that does not need the SNR information, or only uses partially available SNR information, while its performance does not degrade much.

One such scheme is equal gain combining (EGC) [14, 28, 83, 84, 93, 94], i.e., $g\_i = 1/M$ for all $i$:

$$T\_{EGC} = \frac{1}{M} \sum\_{i=1}^{M} T\_{ED,i} \tag{3.118}$$

EGC totally ignores the differences among sensors.

If the normalized signal energies at different sensors have large differences, a natural way is to choose the largest normalized energy for detection. We call this maximum normalized energy (MNE) cooperative sensing. The test statistic is

$$T\_{MNE} = \max\_{1 \le i \le M} T\_{ED,i} \tag{3.119}$$
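The combining rules (3.114) and (3.117)–(3.119) are one-liners over the list of normalized sensor energies. A minimal sketch with our own names, where `snrs` holds the per-sensor SNRs $\gamma\_i$:

```python
def t_lc(energies, weights):
    # eq. (3.117): weighted linear combining of normalized energies
    return sum(g * t for g, t in zip(weights, energies))

def optimal_weights(snrs):
    # LRT weights from eq. (3.114): g_i = gamma_i / (1 + gamma_i)
    return [g / (1.0 + g) for g in snrs]

def t_egc(energies):
    # eq. (3.118): equal gain combining
    return sum(energies) / len(energies)

def t_mne(energies):
    # eq. (3.119): maximum normalized energy ("OR"-like rule)
    return max(energies)
```

EGC needs no SNR information at all, while the optimal LC weights require the per-sensor SNRs, which is exactly the implementation difficulty noted above.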

Note that this is different from always using a fixed sensor known to have the largest normalized signal energy: due to the dynamic changes of wireless channels, the largest normalized energy may not always occur at the same sensor. The method is equivalent to the "OR decision rule" [79, 86].

There has been much research on "selective energy detection". Such methods select the "optimal" sensor for sensing based on different criteria [41, 90–92, 95–98].
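As a minimal illustration, the soft-combining statistics (3.117)–(3.119) can be sketched in NumPy. The function and variable names here are our own, and the per-sensor statistic *TED,i* is taken as the normalized energy of each sensor's samples:

```python
import numpy as np

def energy_stats(x):
    """Per-sensor normalized energies T_ED,i = (1/N) sum_n |x_i(n)|^2.
    x is an (M, N) array; row i holds the N samples of sensor i."""
    return np.mean(np.abs(x) ** 2, axis=1)

def t_lc(x, g):
    """Linear combining (3.117) with non-negative weights g."""
    return float(np.dot(g, energy_stats(x)))

def t_egc(x):
    """Equal gain combining (3.118): g_i = 1/M."""
    return float(np.mean(energy_stats(x)))

def t_mne(x):
    """Maximum normalized energy (3.119)."""
    return float(np.max(energy_stats(x)))
```

Each statistic is then compared against a threshold chosen for a target false-alarm probability.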

#### *3.5.2 Decision Fusion*

In decision fusion, each sensor sends its one-bit (hard) or multiple-bit (soft) decision to a central processor that applies a fusion rule to make the final decision.

Let us consider the case of hard decisions: sensor *i* sends its decision bit *ui* ("1" for signal present and "0" for signal absent) to the fusion center. Let **u** be the vector formed from the *ui*. The test statistic of the optimal fusion rule is thus the LRT [79]:

$$T\_{DFLRT} = \frac{p(\mathbf{u}|\mathcal{H}\_1)}{p(\mathbf{u}|\mathcal{H}\_0)} \tag{3.120}$$

Assuming that the sensors are independent, we have

$$T\_{DFLRT} = \prod\_{i=1}^{M} \frac{p(u\_i | \mathcal{H}\_1)}{p(u\_i | \mathcal{H}\_0)} \tag{3.121}$$

Let *A*<sup>1</sup> be the set of *i* such that *ui* = 1 and *A*<sup>0</sup> be the set of *i* such that *ui* = 0. The above expression can be rewritten as

$$T\_{DFLRT} = \prod\_{i \in A\_1} \frac{P\_{d,i}}{P\_{fa,i}} \prod\_{i \in A\_0} \frac{1 - P\_{d,i}}{1 - P\_{fa,i}} \tag{3.122}$$

where *Pd,i* and *Pfa,i* are the probability of detection and probability of false alarm for sensor *i*, respectively. Taking the logarithm, we obtain

$$\log T\_{DFLRT} = \sum\_{i \in A\_1} \log \frac{P\_{d,i}}{P\_{fa,i}} + \sum\_{i \in A\_0} \log \frac{1 - P\_{d,i}}{1 - P\_{fa,i}} \tag{3.123}$$

By ignoring some constants not related to *ui* , the expression can be rewritten as

$$\log T\_{DFLRT} = \sum\_{i=1}^{M} u\_i \log \frac{P\_{d,i}(1 - P\_{fa,i})}{P\_{fa,i}(1 - P\_{d,i})} \tag{3.124}$$

The test statistic is a weighted linear combination of the decisions from all sensors. The weight for a particular sensor reflects its reliability, which is related to the status of the sensor (for example, signal strength, noise power, channel response, and threshold).

If all sensors have the same status and choose the same threshold, the weights are equal and the LRT is therefore equivalent to the popular "*K* out of *M*" rule: the final decision is "1" if and only if *K* or more decisions are "1"s. This includes "Logical-OR (LO)" (*K* = 1), "Logical-AND (LA)" (*K* = *M*) and "Majority" (*K* = ⌈*M*/2⌉) as special cases [79]. The probability of detection and the probability of false alarm of the method are, respectively,

$$P\_d = \sum\_{i=K}^{M} \binom{M}{i} \left(1 - P\_{d,i}\right)^{M-i} P\_{d,i}^i \tag{3.125}$$

and

$$P\_{fa} = \sum\_{i=K}^{M} \binom{M}{i} \left(1 - P\_{fa,i}\right)^{M-i} P\_{fa,i}^{i}.\tag{3.126}$$

While the Neyman–Pearson theorem tells us that the "*K* out of *M*" rule is optimal (for the equal-sensor case), it does not stipulate how to choose the threshold *th* and *K*. In general, finding the best threshold *th* and *K* requires solving optimization problems tailored to different purposes.
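For concreteness, the probabilities (3.125) and (3.126) for identical sensors can be evaluated directly; this is a small sketch (the function name is ours, not from the text):

```python
from math import comb

def k_out_of_m(p, K, M):
    """Probability that at least K of M independent sensors decide "1",
    each deciding "1" with probability p; this is (3.125)/(3.126) when
    all sensors share the same P_d,i (or P_fa,i)."""
    return sum(comb(M, i) * p**i * (1 - p)**(M - i) for i in range(K, M + 1))

# Special cases: OR rule (K = 1), AND rule (K = M), majority (K = ceil(M/2)).
```

For example, with per-sensor *Pd,i* = 0.9 and *M* = 10, the OR rule gives a cooperative detection probability of 1 − 0.1<sup>10</sup>.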

If each user can send a multiple-bit decision to the fusion center, a more reliable decision can be made. A fusion scheme based on multiple-bit decisions is given in [29]. In general, there is a tradeoff between the number of decision bits and the fusion reliability. There are also other fusion rules that may require additional information [79, 99].

#### *3.5.3 Robustness of Cooperative Sensing*

Let the noise uncertainty factor of sensor *i* be α*i*, and assume that all sensors have the same noise uncertainty bound. For linear combining, the expectation of the noise power in *TLC* is therefore

$$
\sigma\_{LC}^2 = \sum\_{i=1}^M g\_i \hat{\sigma}\_{\eta}^2 / \alpha\_i = \hat{\sigma}\_{\eta}^2 \sum\_{i=1}^M g\_i / \alpha\_i \tag{3.127}
$$

Hence, the noise uncertainty factor for LC fusion is $\alpha\_{LC} = 1/\sum\_{i=1}^{M}(g\_i/\alpha\_i)$. Note that α*i* and 1/α*i* are limited to $[10^{-B/10}, 10^{B/10}]$ and have the same distribution. Hence α*LC* is also limited to $[10^{-B/10}, 10^{B/10}]$. EGC is a special case of LC. Based on the well-known central limit theorem (CLT), it is easy to verify the following theorem for EGC [56].

**Fig. 3.4** ROC curve for data fusion: *N* = 5000, μ = −15 dB, 20 sensors

**Theorem 3.2** *Assume that all sensors have the same noise uncertainty bound B and that their noise uncertainty factors are independent. As M goes to infinity, the noise uncertainty factor of EGC* α*LC* *converges in probability to the deterministic number* $1/\mathrm{E}(\alpha\_i) = \frac{\log(10)\,B}{5\,(10^{B/10} - 10^{-B/10})}$*, that is, for any* $\epsilon > 0$*,*

$$\lim\_{M \to \infty} P\left(|\alpha\_{LC} - 1/\mathrm{E}(\alpha\_i)| > \epsilon\right) = 0 \tag{3.128}$$

This means that, as *M* approaches infinity, there is no noise uncertainty for the EGC fusion rule. Similar results can be proved for some other data fusion rules. Hence, data fusion does reduce the impact of noise uncertainty. For example, the ROC curve for 20 sensors at *N* = 5000 and SNR μ = −15 dB is shown in Fig. 3.4.
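Theorem 3.2 can be checked numerically. The sketch below assumes the common uniform-in-dB model for the noise uncertainty factor, i.e. α*i* = 10<sup>*u*/10</sup> with *u* uniform on [−*B*, *B*]; the bound *B* and the sample count are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
B, M = 2.0, 200_000                      # uncertainty bound in dB, number of sensors
u = rng.uniform(-B, B, M)
alpha = 10.0 ** (u / 10.0)               # alpha_i in [10^(-B/10), 10^(B/10)]
alpha_egc = 1.0 / np.mean(1.0 / alpha)   # alpha_LC for EGC (g_i = 1/M)
limit = np.log(10) * B / (5 * (10 ** (B / 10) - 10 ** (-B / 10)))
# alpha_egc should lie close to the deterministic limit 1/E(alpha_i)
```

With *M* this large, `alpha_egc` lands within about one percent of `limit`, matching the convergence-in-probability statement.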

Although cooperative sensing can achieve better robustness and performance, it comes with some issues. First, additional bandwidth is required to exchange information among the cooperating users; in an ad-hoc network, this is by no means a simple task. Second, the information exchange may introduce errors, which can have a major impact on fusion performance.

#### *3.5.4 Cooperative CBD and EBD*

As shown in the previous sections, CBD and EBD are robust sensing methods that are immune to noise uncertainty, so it is interesting to use them for cooperative sensing as well. In [87], methods were proposed to use CBD and EBD for cooperative sensing. Here we give a brief review of those methods.

It is assumed that there are *M* ≥ 1 sensors/receivers in a network. The sensors are distributed at different locations so that their local environments are different and independent. Each sensor has only one antenna. Unlike the previous model, here we consider that the received signal may be contaminated by interference. There are two hypotheses, H0 and H1, which correspond to signal absent and present, respectively. The received signal at sensor/receiver *i* and time *n* is given as

$$\mathcal{H}\_0: \ x\_i(n) = \rho\_i(n) + \eta\_i(n) \tag{3.129}$$

$$\mathcal{H}\_1: \ x\_i(n) = h\_i \mathbf{s}(n - \tau\_i) + \rho\_i(n) + \eta\_i(n) \tag{3.130}$$

Here ρ*i*(*n*) is the interference (such as spurious signals) received at sensor *i*, which may be emitted from other electronic devices due to non-linear Analog-to-Digital Converters (ADCs) or from other intentional/unintentional transmitters. Note that the interferences at different sensors could be different due to their location differences. η*i*(*n*) is the Gaussian white noise at receiver *i*, *s*(*n*) is the primary user's signal, and *hi* is the propagation channel from the primary user to receiver *i*. τ*i* is the relative time delay of the primary signal reaching sensor *i*; note that the primary signal may reach different sensors at different times due to their location differences. In the following we consider baseband processing and assume that the signal, noise and channel coefficients are complex numbers.

#### **3.5.4.1 The Methods**

Let the auto-correlation of the received signal at sensor *i* be

$$\hat{r}\_i(l) = \mathrm{E}(x\_i(n) x\_i^{\*}(n-l)), \ l = 0, 1, \ldots, L-1 \tag{3.131}$$

where *L* is the number of lags. Then, at hypothesis H0,

$$
\hat{r}\_i(l) = \hat{r}\_{\rho,i}(l) + \hat{r}\_{\eta,i}(l) \tag{3.132}
$$

where

$$
\hat{r}\_{\rho,i}(l) = \operatorname{E}(\rho\_i(n)\rho\_i^\*(n-l))\tag{3.133}
$$

$$
\hat{r}\_{\eta,i}(l) = \operatorname{E}(\eta\_i(n)\eta\_i^\*(n-l))\tag{3.134}
$$

Since η*i*(*n*) are white noise samples, we have

$$
\hat{r}\_{\eta,i}(0) = \sigma\_{\eta,i}^2, \; \hat{r}\_{\eta,i}(l) = 0, l > 0 \tag{3.135}
$$

where $\sigma\_{\eta,i}^2$ is the expected noise power at sensor *i*. At hypothesis H1, we have

$$
\hat{r}\_i(l) = |h\_i|^2 \hat{r}\_s(l) + \hat{r}\_{\rho,i}(l) + \hat{r}\_{\eta,i}(l) \tag{3.136}
$$

where

$$
\hat{r}\_s(l) = \mathbf{E}(\mathbf{s}(n)\mathbf{s}^\*(n-l))\tag{3.137}
$$

In practice, there is only a limited number of samples at each sensor. Let *N* be the number of samples. Then the auto-correlations can only be estimated by the sample auto-correlations, defined as

$$r\_i(l) = \frac{1}{N} \sum\_{n=0}^{N-1} x\_i(n) x\_i^{\*}(n-l), \ l = 0, 1, \ldots, L-1 \tag{3.138}$$

It is known that *ri*(*l*) approaches *r*ˆ*i*(*l*) as *N* becomes large. Each sensor computes its sample auto-correlations *ri*(*l*) and then sends them to a fusion center (the fusion center could be one of the sensors). The fusion center first averages the received auto-correlations, that is, it computes

$$r(l) = \frac{1}{M} \sum\_{i=1}^{M} r\_i(l) \tag{3.139}$$

Then the covariance based detection (CBD) in [65] is used for the detection. Let

$$T\_1 = \sum\_{l=0}^{L-1} \mathbf{g}(l)|r(l)|, \ T\_2 = r(0) \tag{3.140}$$

where *g*(*l*) are positive weight coefficients and *g*(0) = 1. The decision statistic of the cooperative covariance based detection (CCBD) is

$$T\_{CCBD} = T\_1/T\_2\tag{3.141}$$

Let

$$
\hat{r}(l) = \frac{1}{M} \sum\_{i=1}^{M} \hat{r}\_i(l) \tag{3.142}
$$

Then *r*(*l*) approaches *r*ˆ(*l*) for large sample size. At hypothesis H0,

$$\hat{r}(l) = \frac{1}{M} \sum\_{i=1}^{M} \hat{r}\_{\rho,i}(l) + \frac{1}{M} \sum\_{i=1}^{M} \hat{r}\_{\eta,i}(l) \tag{3.143}$$

At hypothesis H1,


$$\hat{r}(l) = \left\{\frac{1}{M}\sum\_{i=1}^{M}|h\_i|^2\right\}\hat{r}\_s(l) + \frac{1}{M}\sum\_{i=1}^{M}\hat{r}\_{\rho,i}(l) + \frac{1}{M}\sum\_{i=1}^{M}\hat{r}\_{\eta,i}(l)\tag{3.144}$$

Therefore, at hypothesis H0,

$$T\_1/T\_2 \approx \frac{\sum\_{l=0}^{L-1} g(l) \left| \frac{1}{M} \sum\_{i=1}^{M} \hat{r}\_{\rho,i}(l) \right| + \frac{1}{M} \sum\_{i=1}^{M} \sigma\_{\eta,i}^2}{\frac{1}{M} \sum\_{i=1}^{M} \left( \hat{r}\_{\rho,i}(0) + \sigma\_{\eta,i}^2 \right)} \tag{3.145}$$

while at hypothesis H1,

$$T\_1/T\_2 \approx \frac{\sum\_{l=0}^{L-1} g(l) \left| \frac{1}{M} \sum\_{i=1}^{M} \left( |h\_i|^2 \hat{r}\_s(l) + \hat{r}\_{\rho,i}(l) \right) \right| + \frac{1}{M} \sum\_{i=1}^{M} \sigma\_{\eta,i}^2}{\frac{1}{M} \sum\_{i=1}^{M} \left( |h\_i|^2 \hat{r}\_s(0) + \hat{r}\_{\rho,i}(0) + \sigma\_{\eta,i}^2 \right)} \tag{3.146}$$

Unlike white noise, the interference may be correlated in time. Hence it is possible that $\hat{r}\_{\rho,i}(l) \neq 0$ for *l* > 0. However, if we assume that the interferences at different sensors are different and independently distributed, it is highly likely that $\frac{1}{M}\sum\_{i=1}^{M} \hat{r}\_{\rho,i}(l)$ (*l* > 0) will be small. This is proved in [87] for some special cases. Thus CCBD does improve the robustness to interference.

As long as the primary signal samples are correlated in time, we have *T*1/*T*2 > 1 at hypothesis H1. Hence, we can use *T*1/*T*2 to differentiate hypotheses H0 and H1. We summarize the cooperative covariance based detection (CCBD) as follows.

#### **Algorithm** Cooperative Covariance Based Detection

Step 1. Each sensor computes its sample auto-correlations *ri*(*l*), *l* = 0, 1, ..., *L* − 1.

Step 2. Every sensor sends its sample auto-correlations to the fusion center.

Step 3. The fusion center computes the average of the sample auto-correlations of all sensors as described in (3.139).

Step 4. The fusion center computes the two statistics *T*1 and *T*2 as described in (3.140).

Step 5. Determine the presence of the signal based on *T*1, *T*2 and a threshold γ: if *T*1/*T*2 > γ, the signal is present; otherwise, it is absent.

In Algorithm CCBD, a special choice for the weights is *g*(0) = 1, *g*(*l*) = 2(*L* − *l*)/*L* (*l* = 1, ..., *L* − 1). With this choice, *T*1 is equivalent to the sum of the absolute values of all the entries of the matrix **R***x*, and *T*2 to the sum of the absolute values of all its diagonal entries.
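The CCBD steps above can be sketched as a minimal NumPy implementation (the names are ours; each row of `X` plays the role of one sensor's samples, and the weights are the special choice just described):

```python
import numpy as np

def fused_autocorr(X, L):
    """Steps 1-3: per-sensor sample auto-correlations (3.138), averaged
    over sensors as in (3.139). X is an (M, N) array of samples."""
    M, N = X.shape
    r = np.zeros(L, dtype=complex)
    for x in X:
        r += np.array([np.sum(x[l:] * np.conj(x[:N - l])) / N for l in range(L)])
    return r / M

def ccbd_statistic(X, L):
    """Steps 4-5: the decision statistic T1/T2 of (3.140)-(3.141), with
    g(0) = 1 and g(l) = 2(L - l)/L for l >= 1."""
    r = fused_autocorr(X, L)
    g = np.array([1.0] + [2.0 * (L - l) / L for l in range(1, L)])
    T1 = np.sum(g * np.abs(r))
    T2 = np.abs(r[0])
    return float(T1 / T2)
```

A strongly time-correlated input drives *T*1/*T*2 well above 1, while for white noise it stays near 1; the threshold γ separates the two.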

We can form the sample covariance matrix defined as

$$\mathbf{R}\_x = \begin{bmatrix} r(0) & \cdots & r(L-1) \\ \vdots & \ddots & \vdots \\ r^{\*}(L-1) & \cdots & r(0) \end{bmatrix} \tag{3.147}$$

Based on the analysis above, at hypothesis H0, **R***x* is approximately a diagonal matrix, while at hypothesis H1, **R***x* is far from diagonal if the primary signal samples are correlated in time.

Based on the sample covariance matrix, the eigenvalue based detections (EBD) discussed in the last sections can also be used here. We summarize the cooperative eigenvalue based detection (CEBD) as follows.

#### **Algorithm** Cooperative Eigenvalue Based Detection

Step 1–Step 3. Same as Algorithm CCBD.

Step 4. Form the sample covariance matrix **R***x* and compute its maximum eigenvalue ζ*max* and its trace, denoted *Tr*.

Step 5. Determine the presence of the signal based on ζ*max*, *Tr* and a threshold γ: if ζ*max*/*Tr* > γ, the signal is present; otherwise, it is absent.
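The eigenvalue step can likewise be sketched. Given the fused auto-correlations, the Hermitian Toeplitz matrix (3.147) is built and the ratio ζ*max*/*Tr* returned (function name is ours):

```python
import numpy as np

def cebd_statistic(r):
    """Given fused auto-correlations r(0..L-1), form the Hermitian
    Toeplitz matrix R_x of (3.147) and return zeta_max / Tr."""
    L = len(r)
    R = np.empty((L, L), dtype=complex)
    for i in range(L):
        for j in range(L):
            R[i, j] = r[j - i] if j >= i else np.conj(r[i - j])
    zeta_max = np.linalg.eigvalsh(R)[-1]   # largest eigenvalue (R is Hermitian)
    return float(zeta_max / np.trace(R).real)
```

For a noise-only (near-diagonal) **R***x* the ratio approaches 1/*L*, while time-correlated primary signals push it toward 1.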

#### **3.5.4.2 Comparisons with Other Methods**

There have been extensive studies on cooperative sensing. Some of the methods have been discussed in Sect. 3.5.1. Among them, the cooperative energy detection (CED) is the most popular method. Here we choose the CED for comparison.

In general, ED needs to know the noise power, and a wrong estimate of the noise power greatly degrades its performance [7, 47]. CED improves the situation somewhat but is still vulnerable to noise power uncertainty, as shown above. Furthermore, when unexpected interference is present, CED treats it as signal and hence gives a high probability of false alarm.

Compared with CED, the advantages of CCBD/CEBD are: (1) as an inherent property of covariance and eigenvalue based detection [47, 65], CCBD/CEBD is robust to noise uncertainty; (2) due to the cancellation of auto-correlations at non-zero lags, CCBD/CEBD is not sensitive to interference; (3) it is naturally immune to wideband interference, since such interference has very weak time correlation; (4) there is no need for noise power estimation at all, which reduces implementation complexity.

Compared to single-sensor covariance and eigenvalue based detections [47, 65], which may be affected by correlated interference, CCBD/CEBD overcomes this drawback by cancelling the adverse impact in the data fusion.

#### **3.6 Summary**

In this chapter, spectrum sensing techniques, including classical and newly developed robust methods, have been reviewed in a systematic way. We started with the fundamental sensing theories from the optimal likelihood ratio test perspective, then reviewed the classical methods, including the Bayesian method, the robust hypothesis test, energy detection, matched filtering detection, and cyclostationary detection. After that, robust sensing methods, including eigenvalue based sensing and covariance based detection, were discussed in detail; these enhance the sensing reliability in hostile environments. Finally, cooperative spectrum sensing techniques were reviewed, which improve the sensing performance by combining the test statistics or decision data from multiple sensors. This chapter covers only the basics of spectrum sensing; many topics, such as wideband spectrum sensing [100–103] and compressive sensing [104–107], are not covered here, and interested readers are encouraged to refer to the relevant literature.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 4 Concurrent Spectrum Access**

**Abstract** Concurrent spectrum access (CSA), which allows different communication systems to transmit simultaneously on the same frequency band, has been recognized as one of the most important techniques for realizing dynamic spectrum management (DSM). By regulating the interference received by the primary users, the secondary users are able to gain continuous transmission opportunities. Without the need for frequent spectrum detection and reconfiguration, CSA has the merits of low cost and easy implementation in practice. In this chapter, we present some important CSA models, discuss the key problems in these CSA systems, and review the techniques for dealing with them.

#### **4.1 Introduction**

Compared with opportunistic spectrum access (OSA), concurrent spectrum access (CSA) has in recent years been attracting increasing interest from academia and industry [1, 2]. The main reason is three-fold. Firstly, CSA allows one or multiple secondary users (SUs) to transmit simultaneously on the primary spectrum, provided that the interference to the primary users (PUs) can be regulated. Thus, the SUs can transmit continuously regardless of whether the PU is transmitting or not. Secondly, neither inquiry of a geolocation database nor spectrum sensing is needed, and thus frequent spectrum reconfiguration can be avoided. This allows the cognitive device to use low-cost hardware, making it easier to deploy. Thirdly, CSA can achieve higher area spectral efficiency due to its spatial reuse of spectrum [3, 4], and can therefore be used to accommodate dense wireless traffic in hot-spot areas.

To enable CSA, the secondary transmitter (SU-Tx) needs to limit the interference power it produces at the primary receiver (PU-Rx) by designing its transmit strategy, such as transmit power, bit-rate, bandwidth and antenna beam, according to the channel state information (CSI) of the primary and secondary systems. Mathematically, the design problem can be formulated as optimizing the secondary performance under the physical resource limitations of the secondary system and the protection requirement of the primary system. The physical resource constraint has already been taken into consideration in transmission design for traditional communication systems with dedicated operating spectrum [5–7]. The additional primary protection constraint, however, poses new challenges to the design of both single-antenna and multi-antenna CSA systems.

Depending on whether the interference temperature is explicitly given, the primary protection constraint takes two forms. When the interference temperature is given as a predefined value, the primary protection constraint can be explicitly expressed as an *interference power constraint*. There are basically two types of interference power constraint, known as the *peak* interference power constraint and the *average* interference power constraint [8]. The peak interference power constraint restricts the interference power level for every channel state, while the average interference power constraint regulates the average interference power across all channel states. The peak interference power constraint is more stringent, and with it the PUs can be protected at all times; it is thus suitable for protecting PUs with delay-sensitive services. The average interference power constraint is less stringent, since it allows the interference power to exceed the interference temperature in some channel states; it is thus suitable for protecting PUs with delay-insensitive services. On the other hand, when an explicit interference temperature is unavailable, a *primary performance loss constraint* is used to protect the PUs [9, 10]. In fact, this is a fundamental formulation of the primary protection constraint, and it can help the SUs exploit the sharing opportunity more efficiently. However, this constraint requires information including the CSI of the primary signal link and the transmit power of the PU, which is hard to obtain in practice due to the lack of cooperation between the primary and secondary systems.

The research on CSA systems in which the SUs are equipped with a single antenna mainly focuses on the analysis of the secondary channel capacity. It has been shown that, under the interference power constraint, the capacity of a secondary system with a fading channel exceeds that with an additive white Gaussian noise (AWGN) channel [11]. The reason is that the variation of the fading channel provides more transmission opportunities for the secondary system. For the flat-fading channel, the secondary channel capacity under the peak and average interference power constraints is studied in [12], whereas the ergodic capacity and the outage capacity under various combinations of the peak/average interference power constraint and the peak/average transmit power constraint are studied in [13]. It is shown that the capacity under the average power constraint outperforms that under the peak power constraint, since the former provides more flexibility for the SU transmit power design. In [9], the ergodic capacity and the outage capacity under the PU-Rx outage constraint are analysed. It is shown that, to fulfill the same level of outage loss at the PU-Rx, the SU can achieve a larger transmission rate under the PU outage constraint. Even with zero outage loss permitted, the SU still achieves a scalable transmit rate under the PU outage constraint. In [14], the primary channel information is exploited to further improve the secondary performance. To predict the interference power received by the PU-Rx, the CSI from the SU-Tx to the PU-Rx, referred to as cross channel state information (C-CSI), should be known by the SU-Tx. The mean secondary link capacity with imperfect knowledge of the C-CSI is addressed in [15]. To protect the PU under imperfect C-CSI, it is shown that the interference temperature should be decreased, which leads to a decrease of the secondary link capacity.

The use of multiple antennas provides both multiplexing and diversity gains in wireless transmission [16, 17]. In particular, its ability to suppress co-channel interference in multiuser transmission makes it a promising technique for enhancing CSA performance [18]. Generally speaking, multiple antennas provide the SU-Tx in a CSA system with more degrees of freedom in space, which can be split between signal transmission, to maximize the secondary transmit rate, and interference avoidance for the PUs. In [19], the multiple-input multiple-output (MIMO) channel capacity of the SU in a multi-antenna CSA system has been investigated. It is shown that the primary protection constraint makes the methods proposed for the traditional MIMO system inapplicable to the CR transmit and receive design. Moreover, as in single-antenna CSA, the C-CSI is critical for the transmit design for interference avoidance in multi-antenna CSA. In [20], it is shown that when the effective interference channel can be perfectly estimated, the interference power received by the PUs can be completely avoided via cognitive beamforming. In [21], it is further shown that joint transmit and receive beamforming can effectively improve the secondary transmit rate by suppressing the interference produced by the PU-Tx. The use of multiple antennas also facilitates the multiple access and broadcasting of the secondary system [22]. As in the single-antenna case, due to the restriction of both transmit power and interference power, the transmit and receive designs for the traditional multiple-access channel and broadcast channel in multi-antenna systems are inapplicable and thus should be revisited [23, 24]. Moreover, the design for multi-antenna CSA should take into consideration the uncertainty in the estimated channel [25, 26] and the security issue [27, 28].

In the remainder of this chapter, we first present the single-antenna CSA system and discuss the optimal transmit power design under different types of power constraints to maximize the secondary channel capacity. Then, multi-antenna CSA is discussed, and transceiver beamforming is presented for the cases of known and unknown related CSI. After that, the transmit and receive designs for the cognitive multiple-access channel and the cognitive broadcast channel are presented, followed by a discussion of robust design for multi-antenna CSA. As an application of CSA in practice, the spectrum refarming technique is presented. Finally, the chapter concludes with a summary.

#### **4.2 Single-Antenna CSA**

The simplest but most fundamental CSA system comprises a pair of SUs and a pair of PUs. Each terminal is equipped with a single antenna. A single narrow frequency band is shared by the primary and secondary transmissions. All the channels involved in the system are independent block fading (BF) channels. As shown in Fig. 4.1, *g*pp, *g*ps, *g*sp and *g*ss denote the instantaneous channel power gains from PU-Tx to PU-Rx, PU-Tx to SU-Rx, SU-Tx to PU-Rx, and SU-Tx to SU-Rx,

respectively. All the channel power gains are assumed to be independent of each other and to be ergodic and stationary with continuous probability density functions. In order to study the limit of the secondary channel capacity, we consider that the instantaneous channel power gains at each fading state are available at the SU-Tx. The AWGN at the PU-Rx and the SU-Rx is assumed to consist of independent circularly symmetric complex Gaussian variables with zero mean and variance *N*0. We consider that the PU-Tx is not aware of the coexistence of the SU, and thus adopts a fixed transmit power *Pp*. Note that in practice, the transmission of the SU can be noticed by the PU since the interference power received by the PU increases. To compensate for its performance loss, the PU can increase its transmit power. Thus, rather than being fixed, the PU transmit power can adapt to the secondary transmission. This property has been utilized in CR design to indirectly exploit the primary system information [14].

#### *4.2.1 Power Constraints*

In this CR system, the SU-Tx needs to regulate its transmit power to protect the PU service. There are mainly two categories of power constraints, which are the transmit power constraint and the primary protection constraint.

#### (1) *Transmit Power Constraint*

This is a physical resource constraint that restricts the transmit power of the SU according to its power budget. Let ν = (*g*pp, *g*ps, *g*sp, *g*ss), and the SU transmit power under ν be *P*(ν). Given the maximum peak and average transmit power of the SU as *Ppk* and *Pav*, respectively, the transmit power constraint can be formulated as

$$P(\upsilon) \ge 0, \ \forall \upsilon \tag{4.1}$$

$$P(\upsilon) \le P\_{pk}, \ \forall \upsilon \tag{4.2}$$

$$\mathbb{E}[P(\nu)] \le P\_{av} \tag{4.3}$$

Equation (4.2) is known as the *peak transmit power constraint*, which is used to address the non-linearity of the power amplifier of the SU. Equation (4.3) is known as the *average transmit power constraint*, which requires the power consumption of the SU to be affordable in the long term.

#### (2) *Primary Protection Constraint*

The transmission of SU is allowed only when the primary service can be well protected. Thus, the primary protection constraint should be properly formulated. This constraint also differentiates the CR design from the traditional one which is solely restricted by the physical resource constraint. Generally, there are two kinds of primary protection constraints:

• *Interference power constraint*: When the peak or average interference temperature, which are respectively denoted by *Qpk* and *Qav*, can be known by the SU-Tx, the primary protection constraint can be expressed as the interference power constraint, i.e.,

$$g\_{\rm sp}P(\nu) \le Q\_{pk}, \ \forall \nu \tag{4.4}$$

$$\mathbb{E}[g\_{\rm sp}P(\nu)] \le Q\_{av} \tag{4.5}$$

Equation (4.4) is known as the *peak interference power constraint*. Under this constraint the PU is fully protected in every fading state; thus, it is suitable for protecting delay-sensitive services. Equation (4.5) is known as the *average interference power constraint*. This constraint only protects the PU in a long-term sense, and there can be fading states in which the interference power exceeds the interference temperature. Thus, it is suitable for protecting delay-insensitive services.

• *Primary performance loss constraint*: When the peak or average interference temperature is not available, the primary protection constraint can be formulated as

$$
\varepsilon\_p \le \varepsilon\_0,\tag{4.6}
$$

$$
\Delta r\_p \le \delta\_0, \ \forall \nu \tag{4.7}
$$

Equation (4.6) is known as the *PU outage constraint* [29], in which ε<sup>0</sup> denotes the target outage probability of the PU that should be maintained, and ε*<sup>p</sup>* is the outage probability of the PU under the co-transmission of the SU. Letting γ*<sup>p</sup>* be the target signal-to-interference-plus-noise ratio (SINR) of the PU, ε*<sup>p</sup>* can be derived as $\epsilon\_p = \Pr\left\{ \frac{g\_{\rm pp} P\_p}{g\_{\rm sp} P(\nu) + N\_0} < \gamma\_p \right\}$. Equation (4.7) is known as the *primary rate loss constraint* [10], in which δ<sup>0</sup> is the maximum rate loss tolerable by the PU.
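As a numeric illustration of the PU outage constraint (4.6), ε*<sup>p</sup>* can be estimated by Monte Carlo. The Rayleigh-fading assumption (exponential channel power gains) and all parameter values below are our own choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000                                    # number of fading states sampled
Pp, P_su, N0, gamma_p = 10.0, 1.0, 1.0, 3.0    # assumed powers and target SINR
g_pp = rng.exponential(1.0, n)                 # PU-Tx -> PU-Rx power gain
g_sp = rng.exponential(1.0, n)                 # SU-Tx -> PU-Rx power gain

sinr_p = g_pp * Pp / (g_sp * P_su + N0)
eps_p = np.mean(sinr_p < gamma_p)              # PU outage with the SU active
eps_no_su = np.mean(g_pp * Pp / N0 < gamma_p)  # PU outage without the SU
# The SU's interference can only increase the PU outage probability,
# so eps_p >= eps_no_su; (4.6) demands eps_p <= eps_0.
```

Constraint (4.6) is then the requirement that the estimated `eps_p` stay below the target ε<sup>0</sup>.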

Note that in either (4.6) or (4.7), the primary system information, including *g*pp and *Pp*, should be known by the SU-Tx. Such information can be transmitted from the PU to the SU if cooperation between the two systems is available. When inter-system cooperation is unavailable, the authors in [14] propose a scheme that lets the SU-Tx send a probing signal which triggers the power adaptation of the primary system. By doing so, the information of the primary system can be exploited to improve the performance of the secondary system.

Thus, the power constraints of the SU-Tx can be formulated as different combinations of the transmit power constraint and the primary protection constraint, i.e.,

$$\begin{aligned} \mathcal{F}\_1 &= \{ P(\upsilon) : (4.1), (4.2), (4.4) \} \\ \mathcal{F}\_2 &= \{ P(\upsilon) : (4.1), (4.2), (4.5) \} \\ \mathcal{F}\_3 &= \{ P(\upsilon) : (4.1), (4.3), (4.4) \} \\ \mathcal{F}\_4 &= \{ P(\upsilon) : (4.1), (4.3), (4.5) \} \\ \mathcal{F}\_5 &= \{ P(\upsilon) : (4.1), (4.2), (4.6) \} \\ \mathcal{F}\_6 &= \{ P(\upsilon) : (4.1), (4.3), (4.6) \} \\ \mathcal{F}\_7 &= \{ P(\upsilon) : (4.1), (4.2), (4.7) \} \\ \mathcal{F}\_8 &= \{ P(\upsilon) : (4.1), (4.3), (4.7) \} \end{aligned}$$

#### *4.2.2 Optimal Transmit Power Design*

The transmit power of the SU can be optimized to achieve different kinds of secondary channel capacity. Here, we discuss the optimization of the SU transmit power for maximizing the ergodic capacity and minimizing the outage capacity of secondary system under different power constraints, respectively.

#### (1) *Maximizing Ergodic Capacity*

The ergodic capacity of BF channels is defined as the achievable rate averaged over all the fading blocks. Noting that the interference from the PU-Tx to the SU-Rx can be ignored or treated as AWGN, the ergodic capacity of the secondary system can be expressed as

$$C\_{\rm erg} = \mathbb{E}\left[\log\_2\left(1 + \frac{g\_{\rm ss}P(\nu)}{N\_0}\right)\right] \tag{4.8}$$

where the expectation is taken over ν. Then, the achievable ergodic capacity under different sets of power constraint can be formulated as

$$\max\_{P(\nu)\in\mathcal{F}\_i} C\_{\text{erg}}\tag{P4-1}$$
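As a concrete illustration of (4.8), the ergodic capacity under a candidate power policy can be estimated by Monte-Carlo averaging over fading realizations. The sketch below uses a hypothetical setup (Rayleigh fading with unit-mean exponential power gains and a constant-power policy, all assumptions for illustration); it evaluates the objective rather than solving the maximization itself, which requires optimizing $P(\nu)$ over the chosen feasible set:

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 1.0                      # noise power
n_blocks = 200_000            # fading blocks to average over
g_ss = rng.exponential(1.0, n_blocks)   # Rayleigh fading -> unit-mean exponential gain

def ergodic_capacity(P):
    """Monte-Carlo estimate of Eq. (4.8) for a constant transmit power P."""
    return float(np.mean(np.log2(1.0 + g_ss * P / N0)))

c1, c2 = ergodic_capacity(1.0), ergodic_capacity(2.0)
print(f"C_erg(P=1) = {c1:.3f}, C_erg(P=2) = {c2:.3f} bit/s/Hz")
```

Replacing the constant power with a fading-dependent policy $P(\nu)$ satisfying one of the constraint sets $\mathcal{F}_i$ turns this into an evaluator for candidate solutions.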

#### (2) *Maximizing Outage Capacity*

The outage capacity of BF channels is defined as the maximum rate that can be maintained over the fading blocks with a given outage probability. Equivalently, given the outage capacity of the secondary system, denoted by *r*0, the corresponding outage probability can be expressed as

$$p\_{\rm out} = \Pr\left\{ \log\_2 \left( 1 + \frac{\text{g}\_{\rm ss} P(\nu)}{N\_0} \right) < r\_0 \right\} \tag{4.9}$$

Thus, maximizing the outage capacity is equivalent to minimizing the outage probability given the target outage capacity, i.e.,

$$\min\_{P(\nu)\in\mathcal{F}\_i} p\_{\text{out}}\tag{P4-2}$$
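The outage probability (4.9) is easy to validate numerically. In the hypothetical sketch below (Rayleigh fading with unit-mean exponential gain and a constant transmit power, both illustrative assumptions), the Monte-Carlo estimate is checked against the corresponding closed form for an exponential channel gain:

```python
import numpy as np

rng = np.random.default_rng(1)
N0, P, r0 = 1.0, 1.0, 0.5        # noise power, constant transmit power, target rate
g_ss = rng.exponential(1.0, 500_000)

# empirical outage probability, Eq. (4.9)
p_out_mc = float(np.mean(np.log2(1.0 + g_ss * P / N0) < r0))

# for an exponential gain, the outage event g_ss < (2^r0 - 1) N0 / P has a closed form
thr = (2.0 ** r0 - 1.0) * N0 / P
p_out_exact = 1.0 - np.exp(-thr)
print(p_out_mc, p_out_exact)
```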

Solving P4-1 and P4-2 gives the following observations.


#### **4.3 Cognitive Beamforming**

The use of multiple antennas in wireless communication can achieve beamforming gain. Specifically, receive beamforming can suppress interference, while transmit beamforming can avoid interference. By equipping multiple antennas, the SUs can jointly design the transmit precoding and transmit power to effectively balance between the interference avoidance to the PU and the performance optimization for the secondary link. Such a technique is known as cognitive beamforming (CB).

A model of CB is shown in Fig. 4.2, where an SU-Tx transmits signal to the SU-Rx by concurrently sharing the spectrum of the primary system, in which two PUs communicate with each other. The SU-Tx is required to be equipped with more than one antenna, while the other terminals can be equipped with one or multiple antennas. Let $M_1$, $M_2$, $M_{\rm st}$, and $M_{\rm sr}$ be the numbers of antennas at PU1, PU2, the SU-Tx and the SU-Rx, respectively. The full-rank transmit beamforming matrix of PU$_j$ is denoted by $\mathbf{A}_j \in \mathbb{C}^{M_j \times d_j}$, where $j \in \{1,2\}$, $d_j$ denotes the corresponding number of transmit data streams and $1 \le d_j \le M_j$. Then, the transmit covariance matrix of PU$_j$ can be written as $\mathbf{S}_j = \mathbf{A}_j\mathbf{A}_j^H$. The receive beamforming matrix of PU$_j$ is denoted by $\mathbf{B}_j \in \mathbb{C}^{d_j \times M_j}$, where $j \in \{1,2\}$. The primary terminals are considered to be oblivious to the SUs, and treat the interference from the SU-Tx as additional noise. In the secondary system, the transmit beamforming matrix of the SU-Tx is denoted by the full-rank matrix $\mathbf{A}_c \in \mathbb{C}^{M_{\rm st} \times d_c}$, where $d_c \le M_{\rm st}$. Then, $\mathbf{S}_c = \mathbf{A}_c\mathbf{A}_c^H$ is the transmit covariance matrix of the SU-Tx. Finally, $\mathbf{H} \in \mathbb{C}^{M_{\rm sr} \times M_{\rm st}}$ denotes the secondary signal channel matrix and $\mathbf{G}_j \in \mathbb{C}^{M_j \times M_{\rm st}}$ denotes the interference channel matrix from the SU-Tx to PU$_j$.

#### *4.3.1 Interference Channel Learning*

The beamforming design, whether at the receiver side or the transmitter side, heavily relies on the channel matrices. The beamforming in a conventional multi-antenna system with dedicated spectrum is designed based on the signal channel matrix alone. However, the CB design needs the information of both the secondary signal channel matrix and the interference channel matrices from the SU-Tx to the PUs. The CB design with

**Fig. 4.2** A model of cognitive beamforming

perfect knowledge of the interference channel matrices is studied in [30]. However, in a CSA network, the primary system is usually a legacy system that has been deployed and operating for a long time. The primary and secondary systems may also belong to different operators. Therefore, although sharing the same spectrum, it is hard for the primary system to cooperate with the secondary system in terms of estimating and feeding back the interference channel information. Thus, the key problem for practical CB is how to obtain the interference channel matrix at the SU-Tx.

To get some knowledge of the interference channel, a viable way is to allow the SU-Tx to listen to the signal sent by the PUs before its own transmission, and to estimate the channel from the PUs to the SU-Tx. Since the system operates in time-division duplex (TDD) mode, the estimated channel can be treated as the interference channel from the SU-Tx to the PUs according to channel reciprocity. This process is referred to as *channel learning*. The learning-and-transmission protocol is illustrated in Fig. 4.3, in which $T$ is the frame length, $\tau$ is the time duration used for learning the interference channel, and the remainder $T-\tau$ is used for data transmission.

In the channel learning phase, the SU-Tx listens to the transmission of PUs on the spectrum of interest for *N* symbol periods. The received signal can be written as

$$\mathbf{y}(n) = \mathbf{G}\_j^H \mathbf{A}\_j \mathbf{x}\_j(n) + \mathbf{z}(n), \ n = 1, \ldots, N \tag{4.10}$$

where $j=1$ indicates that the signal is transmitted from PU1; otherwise, $j=2$. The vector $\mathbf{x}_j(n)$ contains the encoded signals without power allocation and precoding. Then, the covariance matrix of the received signal at the SU-Tx can be derived as

$$\mathbf{Q}\_{\mathbf{y}} = \mathbb{E}[\mathbf{y}(n)(\mathbf{y}(n))^{H}] = \mathbf{Q}\_{s} + \rho\_{0}\mathbf{I} \tag{4.11}$$

where $\mathbf{Q}_s$ represents the covariance matrix of the signals from the two PUs, and $\rho_0\mathbf{I}$ is the covariance matrix of the AWGN. At the SU-Tx, only the sample covariance matrix can be obtained, i.e.,

$$\hat{\mathbf{Q}}\_{\mathbf{y}} = \frac{1}{N} \sum\_{n=1}^{N} \mathbf{y}(n) (\mathbf{y}(n))^{H} \tag{4.12}$$

Denote $\hat{\mathbf{Q}}_s$ as the estimate of $\mathbf{Q}_s$ that can be extracted from $\hat{\mathbf{Q}}_y$. The aggregate "effective" channel from both PUs to the SU-Tx can be derived as

$$\mathbf{G}\_{\rm eff}^{H} = \hat{\mathbf{Q}}\_{\rm s}^{1/2} \tag{4.13}$$

It should be noted that the channel estimated in this way is the so-called *effective interference channel* (EIC) rather than the actual interference channel. This channel captures how interference propagates to both of the PUs. Under the assumption of channel reciprocity, the EIC from the SU-Tx to both PUs can be denoted by $\mathbf{G}_{\rm eff}$.
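The learning steps (4.10)–(4.13) can be sketched numerically. In this synthetic example (a single active PU stream and a known noise power, both illustrative assumptions), the SU-Tx forms the sample covariance (4.12), removes the noise term of (4.11), and takes a Hermitian matrix square root to obtain the estimated EIC:

```python
import numpy as np

rng = np.random.default_rng(2)
M_st, d, N, rho0 = 4, 1, 5000, 0.1   # SU-Tx antennas, PU streams, samples, noise power

# hypothetical effective PU-to-SU-Tx channel (the product G^H A in Eq. (4.10))
GA = rng.normal(size=(M_st, d)) + 1j * rng.normal(size=(M_st, d))
X = (rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))) / np.sqrt(2)   # unit-power symbols
Z = np.sqrt(rho0 / 2) * (rng.normal(size=(M_st, N)) + 1j * rng.normal(size=(M_st, N)))
Y = GA @ X + Z                         # received samples, Eq. (4.10)

Q_y = (Y @ Y.conj().T) / N             # sample covariance, Eq. (4.12)
Q_s_hat = Q_y - rho0 * np.eye(M_st)    # remove the (known) noise term of Eq. (4.11)

# Hermitian square root via EVD gives the estimated EIC, Eq. (4.13)
w, V = np.linalg.eigh(Q_s_hat)
w = np.maximum(w, 0.0)                 # clip small negative eigenvalues due to finite N
G_eff_H = V @ np.diag(np.sqrt(w)) @ V.conj().T
print("EIC estimate has shape", G_eff_H.shape)
```

As $N$ grows, the square of this estimate approaches the true signal covariance $\mathbf{Q}_s$, consistent with the $N \to \infty$ assumption used in the next subsection.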

#### *4.3.2 CB with Perfect Channel Learning*

In this part, the transmit beamforming at the SU-Tx, including the transmit precoding and power allocation, under perfect learning of the EIC is discussed. In the EIC learning, the noise effect on estimating $\mathbf{Q}_s$ from $\hat{\mathbf{Q}}_y$ can be completely removed by choosing a large enough $N$, i.e., $N \to \infty$.

To avoid the interference caused by the SU-Tx to both of the PUs, the precoding matrix of the SU-Tx should meet

$$\mathbf{G}\_{\text{eff}} \mathbf{A}\_{c} = \mathbf{0} \tag{4.14}$$

Denote $d_{\rm eff}$ as the rank of $\mathbf{G}_{\rm eff}$. The eigenvalue decomposition (EVD) of $\mathbf{Q}_s$ can be written as $\mathbf{Q}_s = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^H$, where $\mathbf{V} \in \mathbb{C}^{M_{\rm st} \times d_{\rm eff}}$ and $\boldsymbol{\Lambda}$ is a positive $d_{\rm eff} \times d_{\rm eff}$ diagonal matrix. Letting $\mathbf{U} \in \mathbb{C}^{M_{\rm st} \times (M_{\rm st}-d_{\rm eff})}$ satisfy $\mathbf{V}^H\mathbf{U} = \mathbf{0}$, the transmit beamforming matrix of the SU-Tx can be written as

$$\mathbf{A}\_c = \mathbf{U} \mathbf{C}\_c^{1/2} \tag{4.15}$$

where $\mathbf{C}_c^{1/2} \in \mathbb{C}^{(M_{\rm st}-d_{\rm eff}) \times d_c}$ and $d_c$ denotes the number of transmit data streams of the SU-Tx. $\mathbf{C}_c$ satisfies $\mathbf{C}_c \succeq \mathbf{0}$ and $\mathrm{Tr}(\mathbf{C}_c) \le P_t$, where $P_t$ denotes the maximum transmit power of the SU-Tx. Equation (4.15) indicates that the design of the transmit beamforming matrix for the CR channel is equivalent to the design of the transmit covariance matrix $\mathbf{C}_c$ for an auxiliary multi-antenna channel, i.e., $\mathbf{HU}$, subject to the transmit power constraint $\mathrm{Tr}(\mathbf{C}_c) \le P_t$. This simplifies the design of $\mathbf{C}_c$, since existing solutions are available for this well-studied precoder design problem (see [31] and the references therein).
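A minimal numerical sketch of (4.14)–(4.15), with all quantities synthetic: starting from a rank-$d_{\rm eff}$ covariance $\mathbf{Q}_s$, the EVD yields $\mathbf{U}$ spanning the unused spatial dimensions, and any precoder of the form $\mathbf{A}_c = \mathbf{U}\mathbf{C}_c^{1/2}$ then leaks no interference through the learned EIC:

```python
import numpy as np

rng = np.random.default_rng(3)
M_st, d_eff, d_c = 4, 2, 2

# synthetic rank-d_eff PU covariance Q_s and its Hermitian square root G_eff
B = rng.normal(size=(M_st, d_eff)) + 1j * rng.normal(size=(M_st, d_eff))
Q_s = B @ B.conj().T

w, T = np.linalg.eigh(Q_s)             # eigenvalues in ascending order
V = T[:, -d_eff:]                      # signal subspace
U = T[:, :M_st - d_eff]                # its orthogonal complement: V^H U = 0

G_eff = V @ np.diag(np.sqrt(w[-d_eff:])) @ V.conj().T   # Q_s^{1/2} (Hermitian)

C_half = rng.normal(size=(M_st - d_eff, d_c))   # any loading matrix C_c^{1/2}
A_c = U @ C_half                                # precoder of Eq. (4.15)

leak = np.linalg.norm(G_eff @ A_c)     # Eq. (4.14): interference is zero-forced
print("leakage through the EIC:", leak)
```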

When the conditions $\mathbf{A}_j^H\mathbf{G}_j \sim \mathbf{B}_j\mathbf{G}_j$, $j \in \{1,2\}$, hold,<sup>1</sup> and one or both of the PUs have multiple antennas but transmit only through a subspace of the overall spatial dimensions, i.e., $d_j < \min\{M_1, M_2\}$, the proposed CB scheme based on (4.15) outperforms the "P-SVD" scheme proposed in [30], where the $\mathbf{G}_j$'s are perfectly known by the SU-Tx, in terms of the achievable degrees of freedom (DoF) of the CR transmission. The reason is that $\mathbf{G}_{\rm eff}$ contains the information of $\mathbf{A}_j^H\mathbf{G}_j$. Under the condition $\mathbf{A}_j^H\mathbf{G}_j \sim \mathbf{B}_j\mathbf{G}_j$, $\mathbf{G}_{\rm eff}$ also contains the information of $\mathbf{B}_j\mathbf{G}_j$. Thus, the proposed scheme can have a strictly positive DoF even when $M_1 + M_2 \ge M_{\rm st}$, provided that $d_1 + d_2 < M_{\rm st}$. In contrast, $\mathbf{B}_j\mathbf{G}_j$ is unknown in the P-SVD scheme.

<sup>1</sup>$\mathbf{X} \sim \mathbf{Y}$ means that, for two given matrices $\mathbf{X}$ and $\mathbf{Y}$ with the same column size, if $\mathbf{Xe} = \mathbf{0}$ for an arbitrary vector $\mathbf{e}$, then $\mathbf{Ye} = \mathbf{0}$ always holds.

Therefore, the DoF of the P-SVD scheme becomes zero when $M_1 + M_2 \ge M_{\rm st}$. In most practical scenarios, $(d_1 + d_2) \le (M_1 + M_2)$, and thereby the DoF achieved by the proposed scheme, $\min\{(M_{\rm st} - d_1 - d_2)^+, M_{\rm sr}\}$, is always no less than the DoF achieved by the P-SVD, $\min\{(M_{\rm st} - M_1 - M_2)^+, M_{\rm sr}\}$. Moreover, the maximum DoF is achieved when $d_1 = d_2 = 0$, i.e., the PU links are switched off.

#### *4.3.3 CB with Imperfect Channel Learning: A Learning-Throughput Tradeoff*

In this part, the CB with imperfect estimation of the EIC due to finite sample size is discussed. With finite $N$, the noise effect on estimating $\mathbf{Q}_s$ cannot be removed, and thus errors appear in the EIC estimation. Denote $\hat{\mathbf{G}}_{\rm eff}$ as the estimated EIC with error. Recall the two-phase protocol given in Fig. 4.3. It can be seen that the sample size $N$ increases as the learning duration $\tau$ increases. This improves the estimation accuracy of $\hat{\mathbf{G}}_{\rm eff}$, and therefore contributes to the CR throughput. However, increasing the learning duration leads to a decrease of the data transmission duration, which harms the CR throughput. Given that the overall frame length is limited by the delay requirement of the secondary service, there exists an optimal learning duration that maximizes the CR throughput. This is the so-called *learning-throughput tradeoff* in the CB design.

To exploit the learning-throughput tradeoff, the optimization problem can be formulated as

$$\max\_{\tau, \mathbf{C}\_c} \frac{T - \tau}{T} \log \left| \mathbf{I} + \mathbf{H} \hat{\mathbf{U}} \mathbf{C}\_c \hat{\mathbf{U}}^H \mathbf{H}^H / \rho\_1 \right| \tag{P4-3}$$
 
$$\text{s.t. } \operatorname{Tr}(\mathbf{C}\_c) \le J, \ \mathbf{C}\_c \succeq \mathbf{0}, \ 0 \le \tau \le T$$

where $\hat{\mathbf{U}}$ is obtained from $\hat{\mathbf{G}}_{\rm eff}$, and $J$ is a threshold that accounts for both the interference power limit and the transmit power limit. In what follows, we present the imperfect estimation of the EIC and the derivation of $J$.

#### (1) *Imperfect Estimation of EIC*

Since $\hat{\mathbf{G}}_{\rm eff}$ depends solely on $\hat{\mathbf{Q}}_s$, we derive $\hat{\mathbf{Q}}_s$ based on $\hat{\mathbf{Q}}_y$, whose EVD is

$$
\hat{\mathbf{Q}}\_{\mathbf{y}} = \hat{\mathbf{T}}\_{y} \hat{\boldsymbol{\Lambda}}\_{y} \hat{\mathbf{T}}\_{y}^H \tag{4.16}
$$

where $\hat{\boldsymbol{\Lambda}}_y = \mathrm{Diag}(\hat{\lambda}_1, \hat{\lambda}_2, \ldots, \hat{\lambda}_{M_{\rm st}})$ is the eigenvalue matrix of $\hat{\mathbf{Q}}_y$. Then, we consider two cases:

• *With known noise power*: When the noise power $\rho_0$ is known, the maximum likelihood estimate of $\mathbf{Q}_s$ can be written as

$$\hat{\mathbf{Q}}\_{s} = \hat{\mathbf{T}}\_{y} \text{Diag}\left( (\hat{\lambda}\_{1} - \rho\_{0})^{+}, \dots, (\hat{\lambda}\_{M\_{\rm st}} - \rho\_{0})^{+} \right) \hat{\mathbf{T}}\_{y}^{H} \tag{4.17}$$

whose rank is $\hat{d}_{\rm eff}$. The first $\hat{d}_{\rm eff}$ columns of $\hat{\mathbf{T}}_y$ give the estimate of $\mathbf{V}$, denoted $\hat{\mathbf{V}}$, and the last $M_{\rm st} - \hat{d}_{\rm eff}$ columns of $\hat{\mathbf{T}}_y$ give $\hat{\mathbf{U}}$, which will be used to design the CB precoding matrix.

• *With unknown noise power*: When the noise power $\rho_0$ is unknown, it should be estimated along with $\hat{\mathbf{Q}}_s$. With $\hat{\rho}_0$, $\hat{d}_{\rm eff}$, $\hat{\mathbf{V}}$ and $\hat{\mathbf{U}}$ obtained, the maximum likelihood estimate of $\mathbf{Q}_s$ can be derived as

$$\hat{\mathbf{Q}}\_{\rm s} = \hat{\mathbf{V}} \text{Diag}\left(\hat{\lambda}\_{1} - \hat{\rho}\_{0}, \dots, \hat{\lambda}\_{\hat{d}\_{\rm eff}} - \hat{\rho}\_{0}\right) \hat{\mathbf{V}}^{H} \tag{4.18}$$

which has the same structure as (4.17).

With $\hat{\mathbf{Q}}_s$ derived, the estimate of the EIC can be determined according to (4.13).
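The known-noise-power estimator (4.17) amounts to eigenvalue soft-thresholding of the sample covariance. A small synthetic sketch follows; the rank detection via a simple margin above the noise floor is an illustrative choice of ours, not the book's detection rule:

```python
import numpy as np

rng = np.random.default_rng(4)
M_st, d, N, rho0 = 4, 2, 2000, 0.5

A = rng.normal(size=(M_st, d)) + 1j * rng.normal(size=(M_st, d))   # hypothetical PU mixing
X = (rng.normal(size=(d, N)) + 1j * rng.normal(size=(d, N))) / np.sqrt(2)
Z = np.sqrt(rho0 / 2) * (rng.normal(size=(M_st, N)) + 1j * rng.normal(size=(M_st, N)))
Y = A @ X + Z

Q_y = (Y @ Y.conj().T) / N
lam, T_y = np.linalg.eigh(Q_y)                 # EVD of the sample covariance, Eq. (4.16)
lam, T_y = lam[::-1], T_y[:, ::-1]             # reorder eigenvalues descending

Q_s_hat = T_y @ np.diag(np.maximum(lam - rho0, 0.0)) @ T_y.conj().T   # Eq. (4.17)

# illustrative rank detection: keep eigenvalues clearly above the noise floor
d_eff_hat = int(np.sum(lam > 1.5 * rho0))
U_hat = T_y[:, d_eff_hat:]                     # trailing columns give U-hat
print("estimated rank:", d_eff_hat)
```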

#### (2) *Interference Leakage to PUs*

Since the estimated EIC is imperfect, there will be interference power leaked to the PUs. Thus, the power constraint $\mathrm{Tr}(\mathbf{C}_c) \le J$ should account for the interference leakage and the transmit power limit simultaneously. Based on the CB design in (4.15) with $\mathbf{U}$ replaced by $\hat{\mathbf{U}}$, the precoded transmit signal at the SU-Tx can be written as $\mathbf{s}_c(n) = \hat{\mathbf{U}}\mathbf{C}_c^{1/2}\mathbf{t}_c(n)$, $n > N$. Then, the average interference leakage to PU$_j$ can be expressed as

$$I\_j = \mathbb{E}[\|\mathbf{B}\_j \mathbf{G}\_j \mathbf{s}\_c(n)\|^2] \tag{4.19}$$

The interference leakage normalized with respect to $\rho_0 \mathrm{Tr}(\mathbf{B}_j\mathbf{B}_j^H)$ is then upper bounded by

$$\bar{I}\_j \le \frac{\text{Tr}(\mathbf{C}\_c)}{\alpha\_j N} \frac{\lambda\_{\text{max}}(\mathbf{G}\_j \mathbf{G}\_j^H)}{\lambda\_{\text{min}}(\mathbf{A}\_j^H \mathbf{G}\_j \mathbf{G}\_j^H \mathbf{A}\_j)} \tag{4.20}$$

where $\alpha_j$ is defined as $\mathbb{E}[N_j/N]$, and $N_j$ is the number of samples collected during the transmission of PU$_j$. The upper bound of the average interference leakage in (4.20) has some interesting properties:


With the upper bound of the interference leakage, the SINR of PU$_j$, denoted by $\gamma_j$, can be derived. Let $\gamma = \min_{j \in \{1,2\}} \{\gamma_j\}$. The threshold $J$ in the constraint of P4-3 can then be derived as $J = \min(P_t, \gamma\tau)$ under the peak transmit power constraint, and $J = \min\left(\frac{T}{T-\tau}P_t, \gamma\tau\right)$ under the average transmit power constraint.

After $\hat{\mathbf{U}}$ and $J$ are determined, P4-3 can be solved. It can be seen that, by introducing a learning phase before data transmission, the multi-antenna SU-Tx is able to estimate the interference channel information that is indispensable for interference control, and to strike a good balance between interference avoidance and throughput maximization.

#### **4.4 Cognitive MIMO**

In this section, we exploit multiple antennas at the secondary terminals to effectively balance between spatial multiplexing at the SU-Tx and interference avoidance at the PUs. The main challenges to be addressed include:


The model of the cognitive multiple-input multiple-output (MIMO) system is shown in Fig. 4.4, where a pair of SUs shares the same spectrum with $K$ PUs. The number of antennas of PU$_k$ is denoted by $M_k$, and the numbers of antennas of the SU-Tx and the SU-Rx are denoted by $M_{\rm st}$ and $M_{\rm sr}$, respectively. A single frequency band is shared by the primary and secondary systems. $\mathbf{H} \in \mathbb{C}^{M_{\rm sr} \times M_{\rm st}}$ denotes the secondary signal channel matrix and $\mathbf{G}_k \in \mathbb{C}^{M_k \times M_{\rm st}}$ denotes the interference channel matrix from the SU-Tx to PU$_k$.

#### *4.4.1 Spatial Spectrum Design*

In this part, we discuss the spatial spectrum design for the SU-Tx to optimize the CR throughput while avoiding interference to the PUs. To explore the performance limit, we assume that the channel matrices from the SU-Tx to the SU-Rx and from the SU-Tx to each PU are perfectly known by the SU-Tx. Let $\mathbf{x}(n)$ be the transmit signal vector of the SU-Tx, which has been encoded and precoded. The received signal at the SU-Rx can be represented by

$$\mathbf{y}(n) = \mathbf{H}\mathbf{x}(n) + \mathbf{z}(n) \tag{4.21}$$

where $\mathbf{z}(n)$ is the AWGN vector with normalized covariance $\mathbf{I}$. Let $\mathbf{S}$ be the transmit covariance matrix of the secondary system, i.e., $\mathbf{S} = \mathbb{E}[\mathbf{x}(n)\mathbf{x}(n)^H]$, where the expectation is taken over the codebook. Assuming that an ideal Gaussian codebook with an infinitely large number of codeword symbols is used, we have $\mathbf{x}(n) \sim \mathcal{CN}(\mathbf{0}, \mathbf{S})$, $n = 1, 2, \ldots$. Then, by applying the EVD, the transmit covariance matrix can be written as

$$\mathbf{S} = \mathbf{V}\boldsymbol{\Sigma}\mathbf{V}^H \tag{4.22}$$

where $\mathbf{V} \in \mathbb{C}^{M_{\rm st} \times d_c}$ is the precoding matrix with $\mathbf{V}^H\mathbf{V} = \mathbf{I}$, and $d_c \le M_{\rm st}$ is the number of transmit data streams. $d_c$ is usually referred to as the degree of spatial multiplexing because it measures the number of transmit dimensions in the spatial domain. When $d_c = 1$, the transmit strategy is known as beamforming, while when $d_c > 1$, it is known as spatial multiplexing. The transmit power of the SU-Tx is limited by its power budget $P_t$. Thus, the transmit power constraint can be formulated as $\mathrm{Tr}(\mathbf{S}) \le P_t$. Letting $\mathbf{g}_{k,j} \in \mathbb{C}^{1 \times M_{\rm st}}$ be the channel vector from the SU-Tx to the $j$th receive antenna of the $k$th PU, we have $\mathbf{G}_k = [\mathbf{g}_{k,1}^T, \ldots, \mathbf{g}_{k,M_k}^T]^T$. Then, two kinds of interference power constraints can be formulated:

• *Total interference power constraint*: If the total interference power received by all the receive antennas of each PU is limited, the interference power constraint can be formulated as

$$\operatorname{Tr}(\mathbf{G}\_k \mathbf{S} \mathbf{G}\_k^H) \le \mathcal{Q}\_k, \ k = 1, \dots, K \tag{4.23}$$

where *Qk* is the total interference temperature of PU*<sup>k</sup>* .

• *Individual interference power constraint*: If the individual interference power received by each antenna of the PU is limited, the interference power constraint can be formulated as

$$\mathbf{g}\_{k,j}\mathbf{S}\mathbf{g}\_{k,j}^H \le q\_k, \ j = 1, \dots, M\_k, \ k = 1, \dots, K \tag{4.24}$$

where *qk* is the individual interference temperature of PU*<sup>k</sup>* on each of its antennas.

Then, the problem that aims to maximize the secondary capacity by optimizing the spatial spectrum **S** of the SU-Tx can be formulated as

$$\max\_{\mathbf{S}} \quad \log\_2 \left| \mathbf{I} + \mathbf{H} \mathbf{S} \mathbf{H}^H \right| \tag{\mathbf{P4-4}}$$
 
$$\text{s.t. } \operatorname{Tr}(\mathbf{S}) \le P\_t$$
 
$$(4.23) \text{ or } (4.24)$$
 
$$\mathbf{S} \succeq \mathbf{0}$$

In what follows, we discuss how to solve P4-4.

#### **4.4.1.1 One Single-Antenna PU**

When $K = 1$ and $M_1 = 1$, there is only one single-antenna PU in the primary system. In this case, the channel from the SU-Tx to the PU is a multiple-input single-output (MISO) channel, which can be represented as $\mathbf{g} \in \mathbb{C}^{1 \times M_{\rm st}}$. Then, P4-4 can be simplified as

$$\max\_{\mathbf{S}} \quad \log\_2 \left| \mathbf{I} + \mathbf{H} \mathbf{S} \mathbf{H}^H \right| \tag{\mathbf{P4-5}}$$
 
$$\text{s.t. } \operatorname{Tr}(\mathbf{S}) \le P\_t$$
 
$$\mathbf{g} \mathbf{S} \mathbf{g}^H \le q$$
 
$$\mathbf{S} \succeq \mathbf{0}$$

where *q* denotes the interference temperature of the PU. To solve this problem, we consider the following two cases.

(1) *MISO Secondary Channel, i.e., M*sr = 1

In this case, $\mathbf{H}$ reduces to $\mathbf{h} \in \mathbb{C}^{1 \times M_{\rm st}}$, and the rank of $\mathbf{S}$ is one. This indicates that beamforming is optimal for the secondary transmission, and $\mathbf{S}$ can be written as $\mathbf{S} = \mathbf{v}\mathbf{v}^H$, where $\mathbf{v} \in \mathbb{C}^{M_{\rm st} \times 1}$. Then, P4-5 can be simplified as

$$\max\_{\mathbf{v}} \quad \log\_2 \left( 1 + \|\mathbf{h}\mathbf{v}\|^2 \right) \tag{P4-6}$$
 
$$\text{s.t.} \quad \|\mathbf{v}\|^2 \le P\_t$$
 
$$\|\mathbf{g}\mathbf{v}\|^2 \le q$$

#### (2) *MIMO Secondary Channel, i.e., M*sr > 1

In this case, the rank of the optimal $\mathbf{S}$ may be larger than one, and thus spatial multiplexing rather than beamforming can be optimal. In general, there is no closed-form solution for the optimal $\mathbf{S}$. Thus, two suboptimal algorithms that admit closed-form solutions for $\mathbf{S}$ are proposed as follows.

#### • **D-SVD**:

The direct-channel SVD (D-SVD) method applies the singular value decomposition (SVD) to the secondary signal channel matrix, which can be expressed as $\mathbf{H} = \mathbf{Q}\boldsymbol{\Lambda}^{1/2}\mathbf{U}^H$. Thus, the precoding matrix $\mathbf{V}$ can be obtained as $\mathbf{V} = \mathbf{U}$. Let $M_s = \min\{M_{\rm st}, M_{\rm sr}\}$. The optimal power allocation $\mathbf{p} = [p_1, \ldots, p_{M_s}]$ can be obtained by solving

$$\max\_{\mathbf{p}} \quad \sum\_{i=1}^{M\_s} \log\_2(1 + p\_i \lambda\_i) \tag{P4-7}$$

$$\text{s.t. } \sum\_{i=1}^{M\_s} p\_i \le P\_t$$

$$\sum\_{i=1}^{M\_s} \alpha\_i p\_i \le q$$

$$\mathbf{p} \succeq \mathbf{0}$$

where $\lambda_i$ is the $i$th diagonal element of $\boldsymbol{\Lambda}$, $\alpha_i = \|\mathbf{g}\mathbf{u}_i\|^2$, and $\mathbf{u}_i$ is the $i$th column of $\mathbf{U}$. The problem can be shown to be convex, and the closed-form optimal $p_i$ is given by

$$p\_i = \left(\frac{1}{\nu + \alpha\_i \mu} - \frac{1}{\lambda\_i}\right)^+, \ i = 1, \ldots, M\_s \tag{4.25}$$

where ν and μ are the nonnegative dual variables associated with the transmit power constraint and the interference power constraint, respectively. Therefore, it can be seen that by using D-SVD method, the optimal power allocation for the MIMO secondary channel follows multi-level water-filling form.

#### • **P-SVD**:

The projected-channel SVD (P-SVD) method applies the SVD to the projected channel of $\mathbf{H}$, i.e., $\mathbf{H}_\perp = \mathbf{H}(\mathbf{I} - \hat{\mathbf{g}}\hat{\mathbf{g}}^H)$ with $\hat{\mathbf{g}} = \mathbf{g}^H / \|\mathbf{g}\|$. Applying the SVD to $\mathbf{H}_\perp$ yields $\mathbf{H}_\perp = \mathbf{Q}_\perp \boldsymbol{\Lambda}_\perp^{1/2} \mathbf{U}_\perp^H$. Thus, the precoding matrix $\mathbf{V}$ can be obtained as $\mathbf{V} = \mathbf{U}_\perp$, and the optimal power allocation can be derived as

$$p\_i = \left(\nu - \frac{1}{\lambda\_i^{\perp}}\right)^+, \ i = 1, \ldots, M\_s \tag{4.26}$$

where $\lambda_i^\perp$ is the $i$th diagonal element of $\boldsymbol{\Lambda}_\perp$ and $\nu$ is the dual variable associated with the transmit power constraint. Here we can see that, by using the P-SVD, $\mathbf{U}_\perp^H \hat{\mathbf{g}} = \mathbf{0}$. Since $\mathbf{S} = \mathbf{U}_\perp \boldsymbol{\Sigma} \mathbf{U}_\perp^H$, we have $\mathbf{g}\mathbf{S}\mathbf{g}^H = 0$, which indicates that the interference power produced to the PU can be perfectly avoided.
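The zero-interference property of the P-SVD can be verified numerically: projecting $\mathbf{H}$ onto the subspace orthogonal to $\hat{\mathbf{g}}$ before the SVD forces every transmit direction to be orthogonal to the interference channel. The channels and power allocation below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
M_st, M_sr = 4, 2
H = rng.normal(size=(M_sr, M_st)) + 1j * rng.normal(size=(M_sr, M_st))
g = rng.normal(size=(1, M_st)) + 1j * rng.normal(size=(1, M_st))

g_hat = g.conj().T / np.linalg.norm(g)                  # unit vector along g^H
H_perp = H @ (np.eye(M_st) - g_hat @ g_hat.conj().T)    # projected channel

# the right singular vectors of H_perp play the role of U_perp
_, s, Vh = np.linalg.svd(H_perp, full_matrices=False)
U_perp = Vh.conj().T[:, :M_sr]                          # precoding directions

p = np.array([1.0, 0.5])                                # any power allocation
S = U_perp @ np.diag(p) @ U_perp.conj().T               # transmit covariance

leak = np.abs(g @ S @ g.conj().T)[0, 0]                 # interference to the PU
print("g S g^H =", leak)
```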

#### **4.4.1.2 Multiple Multi-antenna PUs**

With multiple PUs which are equipped with single or multiple antennas, the transmission of the SU-Tx can be designed by considering the following two cases.

#### (1) *MISO Secondary Channel, M*sr = 1

Since a closed-form solution for the optimal $\mathbf{S}$ is hard to obtain in this case, an efficient numerical optimization method can be used to solve the equivalent problem:

$$\max\_{\mathbf{v}} \quad \left\| \mathbf{h} \mathbf{v} \right\|^2 \tag{P4-8}$$

$$\begin{aligned} \text{s.t.} \quad & \left\| \mathbf{v} \right\|^2 \le P\_t\\ & \left\| \mathbf{G}\_k \mathbf{v} \right\|^2 \le Q\_k, \ k = 1, \dots, K \end{aligned}$$

Although both of the constraints in P4-8 specify convex sets of $\mathbf{v}$, the non-concavity of the objective function makes the overall problem non-convex in its current form. However, we can observe that, for any value of $\theta$, $e^{j\theta}\mathbf{v}$ satisfies the constraints of P4-8 if $\mathbf{v}$ satisfies these constraints. Meanwhile, the objective value is unchanged. Thus, we can assume that $\mathbf{hv}$ is a real number, and P4-8 can be transformed into

$$\max\_{\mathbf{v}} \quad \text{Re}(\mathbf{h}\mathbf{v}) \tag{P4-9}$$

$$\begin{aligned} \text{s.t.} \quad \text{Im}(\mathbf{h}\mathbf{v}) &= 0 \\ \|\mathbf{v}\|^2 &\le P\_t \\ \|\mathbf{G}\_k \mathbf{v}\|^2 &\le Q\_k, \ k = 1, \dots, K \end{aligned}$$

This problem can be cast as a second-order cone programming (SOCP) [32], which can be solved by standard numerical optimization software.

#### (2) *MIMO Secondary Channel, i.e., M*sr > 1

In this case, the D-SVD and P-SVD methods proposed for the single-antenna-PU case can be used. Specifically, the multi-level water-filling power allocation of the D-SVD in this case becomes

$$p\_i = \left(\frac{1}{\nu + \sum\_{k=1}^{K} \sum\_{j=1}^{M\_k} \alpha\_{i,k,j} \mu\_k} - \frac{1}{\lambda\_i}\right)^+, \ i = 1, 2, \dots, M\_s \tag{4.27}$$

where $\alpha_{i,k,j} = \|\mathbf{g}_{k,j}\mathbf{u}_i\|^2$, and $\nu$ and $\mu_k$ are the non-negative dual variables associated with the transmit power constraint and the interference power constraint of PU$_k$, respectively. For the P-SVD method, we construct the matrix of the channels from the SU-Tx to all primary receive antennas, denoted as $\mathbf{G} \in \mathbb{C}^{(\sum_k M_k) \times M_{\rm st}}$, by taking each $\mathbf{g}_{k,j}$ as the $\left(\sum_{k'=1}^{k-1} M_{k'} + j\right)$th row of the matrix. Then, the SVD of $\mathbf{G} = [\mathbf{G}_1^T, \ldots, \mathbf{G}_K^T]^T$ can be expressed as $\mathbf{G} = \mathbf{Q}_G \boldsymbol{\Lambda}_G^{1/2} \mathbf{U}_G^H$. Thus, given $M_{\rm st} > \sum_k M_k$ (otherwise, the projection would be trivial), the projection of $\mathbf{H}$ can be expressed as $\mathbf{H}_\perp = \mathbf{H}(\mathbf{I} - \mathbf{U}_G \mathbf{U}_G^H)$.

**Fig. 4.5** The three-phase protocol for the cognitive MIMO system

#### *4.4.2 Learning-Based Joint Spatial Spectrum Design*

In this part, we investigate the cognitive MIMO by solving the two problems:


For simplicity, we consider a primary system with two PUs, i.e., $K = 2$, where only one of the PUs is located within the coverage of the secondary transmission. However, the proposed method is applicable without this assumption by using the EIC introduced in the previous section.

To enable the CR transmission, a three-phase protocol is proposed as shown in Fig. 4.5, whose interpretation is as follows.


It is worth noting that the parameter $\tau_l$ plays an important role in the CR performance. Intuitively, a larger $\tau_l$ might be preferred for better subspace estimation, so that the interference to and from the PUs can be minimized. However, increasing the learning time decreases the data transmission time if the training duration is fixed, which harms the CR throughput. Moreover, taking the interference constraints into consideration during training and data transmission reduces the freedom of power allocation. Thus, to investigate the CR performance, a lower bound of the secondary ergodic capacity is evaluated, which is related to both the channel-estimation error and the interference leakage to and from the PUs [33]. The lower bound of the CR ergodic capacity is then maximized by optimizing the transmit power and the time allocation over the learning, training and transmission stages. A closed-form optimal power allocation can be found for a given time allocation, whereas the optimal time allocation can be found via a two-dimensional search over a confined set [21].

#### **4.5 Cognitive Multiple-Access and Broadcasting Channels**

In the previous sections, the CR system under investigation has only one pair of SUs. In this section, we present the CR system that contains multiple transmitters or receivers, which forms the cognitive multiple-access channel (C-MAC) and the cognitive broadcasting channel (C-BC), respectively.

#### *4.5.1 Cognitive Multiple-Access Channel*

In some practical scenarios, multiple SUs concurrently transmit signals to a common receiver, such as the base station (BS) in a cellular network or a WiFi access point (AP). Such a secondary system can be modelled as the C-MAC, as shown in Fig. 4.6. In this model, $N$ SUs concurrently transmit signals to the BS by sharing the primary spectrum. There are $K$ PUs, each of which is equipped with a single antenna. To enable the multi-access of the SUs, the BS is equipped with $M_r$ receive antennas. Denote $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_N] \in \mathbb{C}^{M_r \times N}$ and $\tilde{\mathbf{H}} = [\tilde{\mathbf{h}}_1, \ldots, \tilde{\mathbf{h}}_K] \in \mathbb{C}^{M_r \times K}$ as the channel matrices from the SUs and the PUs to the BS, respectively. The signal vector received by the BS can be written as

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \tilde{\mathbf{H}}\tilde{\mathbf{x}} + \mathbf{z} \tag{4.28}$$

where $\mathbf{x}$ and $\tilde{\mathbf{x}}$ are the transmit signal vectors of the SUs and the PUs, respectively, and $\mathbf{z}$ is the AWGN vector whose entries are assumed to have zero mean and variance $N_0$. Then, the following two optimization problems can be formulated.

#### (1) *Sum-Rate Maximization Problem*

With the aim of maximizing the total transmission rate of all the *N* SUs, the sum-rate maximization problem for the single-input multiple-output multiple-access channel can be formulated as

$$\max\_{\mathbf{U}, \mathbf{p}} \quad \sum\_{n=1}^{N} r\_n \tag{\mathbf{P4-10}}$$

$$\text{s.t.} \quad p\_n \le P\_t, \ n = 1, \ldots, N \tag{4.29}$$

$$\mathbf{g}\_k^T \mathbf{p} \le \mathcal{Q}\_k, \ k = 1, \dots, K \tag{4.30}$$

where $\mathbf{U} = [\mathbf{u}_1, \ldots, \mathbf{u}_N]$ with $\mathbf{u}_n$ denoting the receive beamforming vector for SU$_n$, and $r_n$ is the information rate of SU$_n$. Equation (4.29) is the peak transmit power constraint with $P_t$ being the maximum allowable transmit power. Equation (4.30) gives the interference power constraints, where $\mathbf{g}_k$ is the vector of channel power gains from the SUs to PU$_k$ and $Q_k$ is the interference temperature of PU$_k$. Using the zero-forcing based decision feedback equalizer (ZF-DFE) at the BS and applying the QR decomposition to the channel matrix $\mathbf{H}$, the channel can be decomposed into independent subchannels, each of which is associated with one SU. This receiver can thus be viewed as receive beamforming, where the beamforming vectors are determined by the QR decomposition of $\mathbf{H}$. Thus, only the power vector $\mathbf{p}$ remains to be optimized, and the objective of the problem can be rewritten as $\max_{\mathbf{p}} \sum_{n=1}^{N} \log\left(1 + \frac{p_n \lambda_n}{N_0}\right)$, where $\lambda_n$ is the effective channel gain of SU$_n$.
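The ZF-DFE decomposition just described can be sketched with a QR factorization: writing $\mathbf{H} = \mathbf{Q}\mathbf{R}$ with $\mathbf{R}$ upper triangular, the columns of $\mathbf{Q}$ act as receive beamformers and, after successive interference cancellation, SU$_n$ sees a scalar subchannel with effective gain $\lambda_n = |r_{nn}|^2$. The channels below are synthetic and the detection-order details are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
M_r, N_su, N0 = 4, 3, 1.0
H = rng.normal(size=(M_r, N_su)) + 1j * rng.normal(size=(M_r, N_su))

Q, R = np.linalg.qr(H)                  # H = QR, R upper triangular
lam = np.abs(np.diag(R)) ** 2           # effective channel gains lambda_n

p = np.full(N_su, 1.0)                  # some feasible power vector
sum_rate = float(np.sum(np.log(1.0 + p * lam / N0)))
print("effective gains:", lam, "sum rate:", sum_rate)
```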

In P4-10, if the interference constraints are replaced with the single sum transmit power constraint, the optimal power allocation can be derived as the conventional water-filling solution. The multiple interference power constraints complicate the solving of the problem, and thus, we solve the problem by considering the following two cases.

• *Single-PU case*: When there is only one PU, and thus only one interference power constraint remains, the optimal power allocation follows a water-filling form. Different from the conventional water-filling power allocation, which has a common water level, this solution has different water levels for different SUs. Moreover, each water level is upper-bounded by the individual maximum allowable transmit power. Therefore, this power allocation scheme is also referred to as capped multi-level (CML) water-filling. Figure 4.7 gives an example of the CML water-filling, where we can see that the power allocated to each SU is limited by the minimum of its specific water level and the water cap.

• *Multiple-PU case*: The method to solve P4-10 with multiple interference constraints is summarized as follows. The method first removes the non-effective interference constraints; suppose $m$ effective constraints remain. It starts with the sub-problems with a single interference constraint. For the case of $i$ constraints, we select $i$ out of the $m$ constraints (thus, there are $\binom{m}{i}$ combinations) and check whether the solution of each sub-problem also satisfies the remaining $(m-i)$ constraints. If yes, this solution is globally optimal; otherwise, we continue to the case of $(i+1)$.
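For the single-PU case, the CML water-filling solution can be sketched numerically. The following is a minimal numpy sketch with toy channel gains and thresholds; `cml_waterfilling` is a hypothetical helper in which the per-SU water levels are found by bisection on the multiplier of the interference constraint, and each level is capped at the peak power $P_t$:

```python
import numpy as np

def cml_waterfilling(lam, g, N0, Pt, Q, iters=100):
    """Capped multi-level water-filling for the single-PU case of P4-10.

    Maximizes sum_n log(1 + p_n*lam_n/N0) subject to 0 <= p_n <= Pt
    and the interference constraint g^T p <= Q. SU n has its own water
    level 1/(mu*g_n) (multi-level), floored at 0 and capped at Pt.
    """
    def alloc(mu):
        return np.clip(1.0 / (mu * g) - N0 / lam, 0.0, Pt)

    # If the fully capped allocation already meets the interference
    # constraint, the peak-power caps are the only binding limits.
    if g @ alloc(1e-12) <= Q:
        return alloc(1e-12)
    lo, hi = 1e-12, 1e12
    for _ in range(iters):          # bisect mu until g^T p reaches Q
        mu = np.sqrt(lo * hi)       # geometric bisection over many decades
        if g @ alloc(mu) > Q:
            lo = mu
        else:
            hi = mu
    return alloc(hi)                # hi side is always feasible

lam = np.array([1.0, 2.0, 0.5])     # effective channel gains (toy values)
g = np.array([0.3, 0.2, 0.5])       # channel power gains to the PU (toy values)
p = cml_waterfilling(lam, g, N0=0.1, Pt=1.0, Q=0.2)
assert g @ p <= 0.2 + 1e-6 and np.all(p <= 1.0 + 1e-9)
```

SUs with stronger effective gains and weaker interference coupling receive higher water levels, until either the interference budget $Q$ or the individual cap $P_t$ binds.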

#### (2) *SINR Balancing Problem*

Taking the fairness among the SUs into consideration, the SINR balancing problem is formulated as

$$\begin{aligned} \max\_{\mathbf{U}, \mathbf{p}} \min\_{1 \le n \le N} \quad & \frac{\gamma\_n(\mathbf{u}\_n, \mathbf{p})}{\gamma\_{n,0}} \\ \text{s.t.} \quad & (4.29), \ (4.30) \end{aligned} \tag{P4-11}$$

where $\gamma\_{n,0}$ is the target SINR of SU$\_n$ and $\gamma\_n(\mathbf{u}\_n, \mathbf{p})$ is the SINR of SU$\_n$, which can be derived as

$$\gamma\_n(\mathbf{u}\_n, \mathbf{p}) = \frac{p\_n \mathbf{u}\_n^H \mathbf{R}\_n \mathbf{u}\_n}{\mathbf{u}\_n^H \left(\sum\_{i \neq n} p\_i \mathbf{R}\_i + N\_0 \mathbf{I} + \sum\_{k=1}^K \tilde{p}\_k \tilde{\mathbf{R}}\_k\right) \mathbf{u}\_n} \tag{4.31}$$

where $\mathbf{R}\_i = \mathbf{h}\_i \mathbf{h}\_i^H$, $\tilde{\mathbf{R}}\_k = \tilde{\mathbf{h}}\_k \tilde{\mathbf{h}}\_k^H$, and $\tilde{p}\_k$ is the transmit power of PU$\_k$. By investigating the properties of P4-11, we can see that (1) the $N$ power constraints and $K$ interference constraints can be treated equally; (2) there is only one dominant constraint in the problem, and thus the problem can be decoupled into $(N+K)$ sub-problems, each with a single constraint; (3) the sub-problems can be solved sequentially, which greatly reduces the complexity of the algorithm. In fact, once the solution of one sub-problem is obtained, we can check whether it satisfies the other constraints. If yes, it can be treated as the global optimum without solving the other sub-problems.

#### *4.5.2 Cognitive Broadcasting Channel*

When a BS equipped with $M\_t$ antennas broadcasts information to $N$ SUs, a cognitive broadcasting channel (C-BC) is formed. A typical C-BC model is shown in Fig. 4.8, in which $\mathbf{g}$ denotes the channel vector from the BS to the PU, and SU$\_n$ has $M\_n$ antennas. The precoding design for the C-BC is different from that for the conventional MIMO-BC, because the transmission of the BS is restricted not only by a sum-power constraint, but also by an interference power constraint. In the literature, the MIMO-BC precoding design is solved by establishing the BC-MAC duality. As one type of BC-MAC duality, the *conventional BC-MAC duality* was proposed to derive the capacity region of the MIMO-BC under a sum-power constraint [34, 35]. As another type, the *minimax duality* can obtain any boundary point of a broadcast channel capacity region under a single sum-power constraint or multiple linear transmit covariance constraints (LTCC) [36, 37]. To solve the C-BC precoding problem, which is restricted by both the sum-power constraint and the interference power constraint, the *general BC-MAC duality* was proposed, which handles multiple general LTCCs and simplifies the problem formulation [24].

The general LTCC is expressed as

$$\text{Tr}(\mathbf{Q}\mathbf{A}) \le J \tag{2.32}$$

where **Q** is the transmit covariance matrix, **A** is a positive semidefinite matrix, and *J* is a predefined threshold. The general LTCC includes various practical power constraints, such as

• *Total transmit power constraint*: if **A** is an identity matrix;

• *Interference power constraint*: if $\mathbf{A} = \mathbf{g}\mathbf{g}^H$, so that $\text{Tr}(\mathbf{Q}\mathbf{A}) = \mathbf{g}^H \mathbf{Q} \mathbf{g}$ is the interference power received by the PU.
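These special cases of the LTCC can be checked numerically. The sketch below uses a random positive semidefinite covariance and a random channel vector as toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Qcov = B @ B.conj().T                 # a PSD transmit covariance Q (toy)
g = rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))

# A = I: Tr(QA) reduces to the total transmit power Tr(Q)
assert np.isclose(np.trace(Qcov @ np.eye(M)).real, np.trace(Qcov).real)

# A = g g^H: Tr(QA) reduces to the interference power g^H Q g at the PU
lhs = np.trace(Qcov @ (g @ g.conj().T))
rhs = (g.conj().T @ Qcov @ g).item()
assert np.isclose(lhs, rhs)
```

Both checks follow from the cyclic property of the trace, which is why a single linear form $\text{Tr}(\mathbf{Q}\mathbf{A}) \le J$ can represent either constraint.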


Then, the C-BC precoding problem can be formulated subject to any combination of the above constraints. To demonstrate how to transform the C-BC precoding problem into its dual C-MAC problem, we take the following weighted sum-rate maximization problem as an example.

$$\max\_{\mathbf{U}\_n^b} \quad \sum\_{n=1}^N w\_n r\_n \tag{\mathbf{P4-12}}$$

$$\text{s.t.} \quad \sum\_{n=1}^N \text{Tr}(\mathbf{U}\_n^b) \le P$$

$$\sum\_{n=1}^N \mathbf{g}^H \mathbf{U}\_n^b \mathbf{g} \le \mathcal{Q}$$

where $r\_n$ and $w\_n$ are the achievable rate and the weight coefficient of SU$\_n$, respectively, and $\mathbf{U}\_n^b$ denotes the precoding matrix of the BS for SU$\_n$. By applying the general BC-MAC duality, non-negative auxiliary variables $q\_u$, $q\_l$ are introduced, with which P4-12 can be transformed to

$$\begin{aligned} &\min\_{q\_l, q\_u} \max\_{\mathbf{U}\_n^b} \sum\_{n=1}^N w\_n r\_n \\ &\text{s.t. } q\_u \left(\sum\_{n=1}^N \text{Tr}(\mathbf{U}\_n^b) - P\right) + q\_l \left(\sum\_{n=1}^N \mathbf{g}^H \mathbf{U}\_n^b \mathbf{g} - \mathcal{Q}\right) \le 0, \end{aligned}$$

Letting $J = q\_u P + q\_l Q$, the equivalent C-BC problem can be written as

$$\begin{aligned} \max\_{\mathbf{U}\_n^b} &\quad \sum\_{n=1}^N w\_n r\_n \\ \text{s.t. } &q\_u \sum\_{n=1}^N \text{Tr}(\mathbf{U}\_n^b) + q\_l \sum\_{n=1}^N \mathbf{g}^H \mathbf{U}\_n^b \mathbf{g} \le J \end{aligned}$$

based on which the dual C-MAC problem can be written as

$$\begin{aligned} \max\_{\mathbf{U}\_n^m} & \quad \sum\_{n=1}^N w\_n r\_n^m\\ & \text{s.t. } \sum\_{n=1}^N \text{Tr}(\mathbf{U}\_n^m) \sigma^2 \le J \end{aligned} $$

where the noise covariance matrix is $q\_u \mathbf{I} + q\_l \mathbf{g}\mathbf{g}^H$.

#### **4.6 Robust Design**

The CSI, including the C-CSI and S-CSI, is critical for the CR system to control interference and optimize its performance. In practice, the CSI obtained by the SU is normally imperfect, so a robust design is needed to make the cognitive transmission strategy less sensitive to the uncertainty in the CSI. In the literature, there are several related robust designs. One idea is to design the robust beamforming so that the interference power constraint is satisfied with high probability. Another is to model the uncertainty in the CSI with a bounded region and to design the robust beamforming to guarantee the interference power constraint for all channels in that region. In this part, we consider two scenarios: only the C-CSI contains uncertainty [38], and both the C-CSI and the S-CSI contain uncertainty [26].

#### *4.6.1 Uncertain Interference Channel*

To focus on the uncertainty in the interference channel, we assume that the secondary S-CSI is perfectly known by the SU-Tx, and that the uncertainty in the interference channel is caused by the PU moving, or by the indistinguishability of the PU-Rx due to mutual transmission between two PUs in TDD mode. In both cases, the PU can be protected by considering that the direction of arrival (DoA) varies within a certain range. The system model is shown in Fig. 4.9 with $K = 1$ and $N = 1$, meaning that there is a single PU and a single pair of SU-Tx and SU-Rx. To characterize the interference channel, we use the spatial multipath model. Let $L$ and $\theta^{(l)}$ be the number of multipaths and the DoA of the $l$th path, respectively, and let $\alpha\_l$ denote the fading coefficient of the $l$th path. Then, the channel from the SU-Tx to the PU can be expressed as

$$\mathbf{g} = \sum\_{l=1}^{L} \alpha\_l \mathbf{a}(\boldsymbol{\theta}^{(l)}) \tag{4.32}$$

where $\mathbf{a}(\theta^{(l)})$ is the steering vector of the $l$th path. Note that given the angular spread of the PU, denoted by $\Delta\theta$, the range of the DoA can be written as $\theta^{(l)} \in [\bar{\theta} - \Delta\theta/2, \bar{\theta} + \Delta\theta/2]$, where $\bar{\theta}$ is the nominal DoA with respect to the SU-Tx antenna array. Generally, if the DoA region of the PU, denoted by $\Theta = [\theta\_1, \theta\_2]$, is perfectly known, we can set $\bar{\theta} - \Delta\theta/2 = \theta\_1$ and $\bar{\theta} + \Delta\theta/2 = \theta\_2$. If $\Theta$ is unknown, we can choose a larger angular spread when estimating the position of the PU so that sufficient protection to the PU can be provided. The rate optimization problem with the aim of maximizing the secondary throughput can be formulated as

$$\begin{aligned} \max\_{\mathbf{w}} \quad & r \\ \text{s.t.} \quad & |\mathbf{a}^H(\theta^{(l)}) \mathbf{w}|^2 \le \mathcal{Q}, \ \forall \theta^{(l)} \in \Theta \\ & \|\mathbf{w}\|^2 \le 1 \end{aligned} \tag{\mathbf{P4-13}}$$

where $r$ is the downlink rate achieved by the secondary transmission, which can be derived as $r = |\mathbf{h}^H \mathbf{w}|^2$. The first constraint is the interference power constraint, and the second is the transmit power constraint, in which the maximum peak transmit power is normalized to one. Thus, similar to P4-8, the problem can be transformed to

$$\begin{aligned} \max\_{\mathbf{w}} & \text{Re}[\mathbf{h}^H \mathbf{w}] \\ & s.t. \, \text{Im}[\mathbf{h}^H \mathbf{w}] = 0, \\ & \quad |\mathbf{a}^H(\theta^{(l)}) \mathbf{w}| \le \sqrt{Q}, \,\,\forall l \\ & \quad \|\mathbf{w}\|^2 \le 1 \end{aligned} \tag{4.33}$$

Such a robust beamforming design allocates the majority of the SU transmit power along the SU-Rx DoA while keeping the transmit power along the DoA of the PU below the interference temperature.
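One simple way to approximate such a design numerically is to project the secondary channel onto the orthogonal complement of the sampled PU steering subspace. The sketch below is a zero-forcing-style heuristic under toy assumptions (half-wavelength ULA, a sampled DoA uncertainty region), not the exact solution of (4.33):

```python
import numpy as np

def steering(theta, M, d=0.5):
    # ULA steering vector with (assumed) half-wavelength element spacing
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

M = 8
Q = 1e-2                                   # toy interference temperature
h = steering(0.5, M)                       # assumed SU-Rx channel (DoA 0.5 rad)
thetas = np.linspace(-0.2, 0.0, 16)        # sampled PU DoA uncertainty region
A = np.stack([steering(t, M).conj() for t in thetas])   # rows: a^H(theta)

# project h onto the orthogonal complement of the sampled PU steering subspace
_, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-3 * s[0]))           # effective rank of the PU subspace
P = np.eye(M) - Vh[:r].conj().T @ Vh[:r]   # orthogonal projector
w = P @ h
w = w / np.linalg.norm(w)                  # meet the power constraint ||w||^2 <= 1

assert np.max(np.abs(A @ w) ** 2) <= Q       # interference kept below Q
assert abs(np.imag(h.conj() @ w)) < 1e-9     # h^H w is real, as in (4.33)
```

Because the PU DoAs lie in a small angular region, the sampled steering vectors span a low-dimensional subspace, so most of the array's degrees of freedom remain available for the SU-Rx direction.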

#### *4.6.2 Uncertain Interference and Secondary Signal Channels*

This part discusses the robust beamforming design for a multi-user MISO system to address the uncertainty in both the C-CSI and the secondary S-CSI, of which only partial knowledge is available. As shown in Fig. 4.9, the SU-Tx with $M$ antennas transmits independent signals to the $N$ SU-Rx's, each of which is equipped with a single antenna. The channel from the SU-Tx to the $n$th SU-Rx is denoted by $\mathbf{h}\_n \in \mathbb{C}^{M \times 1}$. The uncertainty in $\mathbf{h}\_n$ is described by the Euclidean ball

$$\mathcal{H}\_n = \left\{ \mathbf{h} : \|\mathbf{h} - \tilde{\mathbf{h}}\_n\| \le \delta\_n \right\} \tag{4.34}$$

**Fig. 4.9** The system model for robust design

where $\tilde{\mathbf{h}}\_n$ is the estimated channel to the $n$th SU-Rx, and $\delta\_n > 0$ is the radius of the Euclidean ball. Then, the actual channel from the SU-Tx to the $n$th SU-Rx can be modelled as

$$\mathbf{h}\_n = \tilde{\mathbf{h}}\_n + \mathbf{a}\_n, \ n = 1, \ldots, N \tag{4.35}$$

where $\mathbf{a}\_n$ is a norm-bounded uncertainty vector with $\|\mathbf{a}\_n\| \le \delta\_n$. Similarly, the channel from the SU-Tx to PU$\_k$ can be modelled as

$$\mathbf{g}\_k = \mathbf{\tilde{g}}\_k + \mathbf{b}\_k, \ k = 1, \dots, K \tag{4.36}$$

and the uncertainty set of $\mathbf{g}\_k$ is $\mathcal{G}\_k$. Denoting the SU-Tx precoding matrix by $\mathbf{W} = [\mathbf{w}\_1, \ldots, \mathbf{w}\_N] \in \mathbb{C}^{M \times N}$, the total transmit power of the SU-Tx, denoted by $P\_s$, can be derived as $\mathbb{E}[\|\mathbf{x}\|^2] = \sum\_{n=1}^{N} \|\mathbf{w}\_n\|^2$. At the receiver side, the SINR at the $n$th SU-Rx can be derived as $\gamma\_n = \frac{|\mathbf{w}\_n^H \mathbf{h}\_n|^2}{N\_0 + \sum\_{i=1, i \neq n}^{N} |\mathbf{w}\_i^H \mathbf{h}\_n|^2}$. The interference received by PU$\_k$, denoted by $P\_{\text{int}}^k$, can be derived as $\sum\_{n=1}^{N} |\mathbf{w}\_n^H \mathbf{g}\_k|^2$. With $\Gamma\_n$ and $Q\_k$ representing the target SINR of the $n$th SU-Rx and the interference temperature of PU$\_k$, the beamforming design problem can be formulated as

$$\begin{aligned} \min\_{\mathbf{W}} \quad & P\_s \\ \text{s.t.} \quad & \boldsymbol{\gamma}\_n \ge \boldsymbol{\Gamma}\_n, \ \forall \mathbf{h} \in \mathcal{H}\_n \text{ and } \forall n \\ & P\_{\text{int}}^k \le \mathcal{Q}\_k, \ \forall \mathbf{g}\_k \in \mathcal{G}\_k \text{ and } \forall k \end{aligned} \tag{P4-14}$$

This problem aims to minimize the transmit power of the SU-Tx while guaranteeing the QoS requirement of each SU-Rx and keeping the interference received by each PU below its interference temperature. Note that the constraints should be satisfied under all possible channel conditions within the bounded uncertainty. In other words, the QoS of the SUs and the interference constraints should be satisfied in the worst case, i.e., the constraints can be transformed to $\min\_{\mathbf{h}\_n \in \mathcal{H}\_n} \gamma\_n \ge \Gamma\_n, \ \forall n$ and $\max\_{\mathbf{g}\_k \in \mathcal{G}\_k} P\_{\text{int}}^k \le Q\_k, \ \forall k$. Thus, before solving P4-14, the problems $\min\_{\mathbf{h}\_n \in \mathcal{H}\_n} \gamma\_n$ and $\max\_{\mathbf{g}\_k \in \mathcal{G}\_k} P\_{\text{int}}^k$ should be solved first. For these problems, loose bounds, strict bounds and exact robust methods are proposed in [26], which shows that the robust design allows the SU-Tx to transmit with higher power than the non-robust design, and thus can achieve better secondary performance.
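For a Euclidean-ball uncertainty set, the worst-case interference from a single beam admits a simple closed form via the Cauchy-Schwarz inequality. The sketch below checks this bound numerically with random toy channels; it illustrates a standard bound rather than the specific methods of [26]:

```python
import numpy as np

rng = np.random.default_rng(1)
M, eps = 4, 0.1
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # fixed beamformer (toy)
g = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # estimated channel (toy)

# Worst case over the ball ||b|| <= eps, by Cauchy-Schwarz:
#   max_b |w^H (g + b)|^2 = (|w^H g| + eps * ||w||)^2,
# attained by b aligned with w on the ball boundary.
worst = (np.abs(w.conj() @ g) + eps * np.linalg.norm(w)) ** 2

# random perturbations inside the ball never exceed the closed form
for _ in range(1000):
    b = rng.standard_normal(M) + 1j * rng.standard_normal(M)
    b *= eps * rng.random() / np.linalg.norm(b)
    assert np.abs(w.conj() @ (g + b)) ** 2 <= worst + 1e-9
```

Replacing each interference term by such a worst-case expression is what turns the semi-infinite constraints of P4-14 into finitely many deterministic ones.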

#### **4.7 Application: Spectrum Refarming**

Applying the CSA technique in cellular networks is by no means a trivial task [39]. Although resource allocation for traditional cellular networks has been extensively investigated in both single-cell [40] and multi-cell scenarios [41], spectrum sharing among cellular networks is challenging due to the additional interference power constraint. Moreover, the concrete characteristics of each cellular network, such as the infrastructure deployment and the radio access technique (RAT), profoundly affect the CSA design. Quite a few studies have investigated spectrum sharing between systems with the same RAT, for example, an orthogonal frequency division multiple access (OFDMA) secondary system sharing the spectrum of an OFDMA primary system, or both systems being CDMA-based. In fact, due to the explosive growth of fourth generation (4G) wireless traffic, spectrum sharing among OFDMA systems will be increasingly difficult as the 4G licensed spectrum has become crowded. In addition, since the 4G wireless network outperforms the second generation (2G) and the third generation (3G) in terms of peak data rate, latency and throughput, the legacy subscribers have been migrating to the 4G cellular networks. This migration of legacy users decreases the utilization of the legacy licensed spectrum, which thus provides a sharing opportunity for the 4G networks. To this end, the CSA between different generations of cellular networks, known as *spectrum refarming* (SR), has attracted increasing attention in recent years.

There are two SR models, i.e., the *opportunistic SR* model and the *concurrent SR* model, which are developed based on OSA and CSA, respectively.


of CDMA users [44]. When the number of CDMA users decreases, each CDMA user will experience less inter-user interference. Thus, they can tolerate an amount of interference introduced by the OFDMA system, with which the target SINR of the CDMA user can be maintained.

In what follows, we discuss the OFDMA/CDMA concurrent SR. The key challenges to be addressed include: (1) Quantification of the interference temperature: in the related literature, the interference temperature is usually given as a predefined threshold without justification [45, 46]; (2) Joint optimization of the primary and secondary resource allocation: by taking the interference from the PU-Tx to the SU-Rx into consideration, the primary and secondary power allocation can be jointly optimized by exploiting the primary inner power control scheme; (3) Robust power allocation: without the C-CSI, robust power allocation should be designed for the OFDMA system to provide sufficient protection to the CDMA users. The study also extends to the SR of multi-band CDMA systems [47] and heterogeneous SR systems [48, 49].

#### *4.7.1 SR with Active Infrastructure Sharing*

For ease of deployment, the OFDMA system can share the same cell site and the same BS antenna with the CDMA system, as shown in Fig. 4.10 (Scenario I). This kind of infrastructure sharing is known as active infrastructure sharing. Take the wideband CDMA uplink as an example: in practice, it operates over a 5 MHz bandwidth with a chip rate of 3.84 Mcps, and the spreading gain can vary from 2 to 256 [50]. LTE can adopt 256 subcarriers when working in the 5 MHz mode with a subcarrier spacing of 15 kHz. The sampling rate is thus 15 kHz × 256 = 3.84 MHz, which equals the wideband CDMA chip rate. Thus, the two systems can easily be synchronized with the same clock reference.

#### (1) *Quantification of Interference Temperature*

To quantify the interference temperature provided by the CDMA users, the SINR of the CDMA users under interference from the OFDMA system should be derived. Given the number of CDMA users (denoted by $U$) and the spreading gain (denoted by $N$), the SINR of a CDMA user is determined by the specific spreading codes assigned to the users and the instantaneous S-CSI of the CDMA system. Due to the lack of cooperation between the CDMA and OFDMA systems, this information is unknown to the OFDMA system, and thus it is hard for the OFDMA system to predict the CDMA SINR. By considering a large-dimension system where $U, N \to \infty$ and $U/N$ approaches a finite constant, the SINR of the CDMA users approaches an asymptotic value which is independent of the specific codes and the instantaneous S-CSI. Thus, by requiring the asymptotic SINR to be no less than the target SINR, a closed-form interference temperature can be obtained [51].

**Fig. 4.10** Different scenarios of the concurrent SR. Scenario I: SR with active infrastructure sharing; Scenario II: SR with passive infrastructure sharing; Scenario III: SR in heterogeneous networks

#### (2) *Joint Resource Optimization of CDMA and OFDMA Systems*

Note that the interference temperature of the CDMA system is a function of the transmit power of the CDMA user. A larger transmit power provides a higher interference temperature but also introduces higher interference to the OFDMA user. Thus, there exists an optimal CDMA transmit power that maximizes the OFDMA throughput. An efficient algorithm was proposed in [52] to solve the joint resource optimization of the CDMA and OFDMA systems by investigating the convexity of the problem over the CDMA transmit power and the OFDMA resource allocation. Moreover, although the transmit powers of the CDMA and OFDMA systems are jointly optimized, it is unnecessary in practice to inform the CDMA user of the optimal transmit power: once the OFDMA system operates with its optimal transmit power and subcarrier allocation, the CDMA system converges to its optimal power as well, due to its inner power control.

#### *4.7.2 SR with Passive Infrastructure Sharing*

Passive infrastructure sharing refers to the sharing of passive elements in the radio access networks, such as cell sites. When the SR technique is applied with passive infrastructure sharing, the licensed legacy system and the unlicensed system are equipped with separate BS antennas, as shown in Fig. 4.10 (Scenario II). Intuitively, the additional BS antenna should provide more diversity that can be exploited by the OFDMA system to improve the refarming performance [11, 53, 54]. However, without the active participation of the legacy system, it is difficult to obtain the C-CSI, which is the information the OFDMA system needs to predict the interference it produces.

To solve this problem, a robust resource allocation scheme was proposed in [55], where the S-CSI of the OFDMA system is used in place of the C-CSI to predict the interference power. It has been proved that under this scheme, the CDMA service can be over-protected, i.e., the actual interference power is always no larger than the interference temperature. Furthermore, to fully utilize the interference temperature, an iterative resource allocation scheme was proposed which gradually increases the transmit power of the OFDMA users until the actual interference received by the CDMA system reaches the interference temperature.

#### *4.7.3 SR in Heterogeneous Networks*

To provide high throughput and seamless coverage for wireless communications, small cells have been proposed to overlay existing cellular networks [56]. Conventionally, small cells are deployed to share the radio spectrum using the same RAT as the macrocell [57, 58]. By doing so, the small cells can offload macrocell traffic directly. However, they inevitably introduce interference to the macrocell users and thus degrade their performance. To address this problem, SR in heterogeneous networks is a viable solution.

Consider a heterogeneous network as shown in Fig. 4.10 (Scenario III), where multiple OFDMA small cells share the spectrum of a CDMA macrocell. Specifically, the downlink of the small cells shares the spectrum used for the CDMA uplink, since the uplink traffic of the CDMA system is normally lighter than the downlink traffic. By quantifying the interference power produced by each small cell, the resource allocation problem can be formulated, where the objective is to maximize the total throughput of all small cells and the constraints are the total interference power constraint and the individual transmit power constraints. The problem is transformed into optimizing the transmit power and the allocation of the interference temperature among the small cells [48].

In practice, due to the limited signaling between the macrocell and the small cells, the C-CSI between the small cell BS (SBS) and the macrocell base station (MBS) is usually absent. The C-CSI accounts for the distance-based path loss, the small-scale fading and the large-scale shadowing; since the distance between the SBS and the MBS is fixed and can easily be obtained from global geographical information, only the latter two components are unknown. It is found that the optimal power allocation for the SR heterogeneous networks is essentially independent of the fading and shadowing components of the C-CSI and is related only to the distance-based path loss. Therefore, the need for instantaneous information about the fading and shadowing of the C-CSI is avoided.

#### **4.8 Summary**

In this chapter, we have discussed the CSA technique by introducing the single-antenna CSA system, multi-antenna cognitive beamforming, cognitive MIMO, the C-MAC and C-BC, and the robust design for the CSA system. The application of the CSA technique to operating the LTE cellular system on the legacy spectrum, known as spectrum refarming, has also been discussed. Several critical problems in CSA have been addressed, including the absence of interference channel and signal channel knowledge, optimal beamforming and multiplexing, as well as interference avoidance and suppression.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 5 Blockchain for Dynamic Spectrum Management**

**Abstract** Blockchain is believed to bring new opportunities to dynamic spectrum management (DSM). With the features of blockchain, traditional spectrum management methods, such as spectrum auctions, can be improved. Blockchain can also help to overcome challenges in DSM such as security and the lack of incentive mechanisms for collaboration. Moreover, with blockchain, the spectrum usage of a DSM system can be recorded in a decentralized manner. In this chapter, we discuss the potential of blockchain for spectrum management in a systematic way and through multiple case studies.

#### **5.1 Introduction**

Recently, blockchain has received increasing attention, with Bitcoin [1], which it underpins, being the most famous cryptocurrency. Blockchain is essentially an open and distributed ledger, with key characteristics such as immutability, transparency, decentralization and security. The main idea behind blockchain is to distribute the validation authority over transactions to a community of nodes and to use cryptographic techniques to guarantee the immutability of the transactions. Far from being used only as a ledger, blockchain has been able to support various kinds of cryptocurrencies and smart contracts, which autonomously execute agreements reached between nodes in blockchain networks.

The aforementioned characteristics of blockchain make it beneficial in many areas of communications. For example, with its encryption algorithms, blockchain has been used to guarantee the integrity of data in the Internet of Things (IoT) [2], and with its traceability, blockchain has been used to design a collaborative video streaming framework for Mobile Edge Computing (MEC) [3]. Moreover, blockchain is seen as a promising technology to achieve more efficient dynamic spectrum management (DSM) [4, 5]. According to the Federal Communications Commission (FCC), blockchain could be used to reduce the administrative expenses of dynamic spectrum access systems and thus increase spectrum efficiency [6].

As a secure ledger, blockchain has been introduced to record spectrum auctions initiated by the licensed users [7]. With the use of blockchain, spectrum transactions are recorded and maintained by all the users in an immutable and verifiable manner. Moreover, a dynamic spectrum access system featuring secure cooperative sensing has been proposed with the use of blockchain [8]. In such a system, the opportunity for spectrum access is first explored by cooperative sensing and the access right is then allocated through an auction, with all the information of the spectrum auction securely stored in a blockchain. Besides, the use of a smart contract, which is built on top of blockchain, has also been explored to execute the spectrum sensing service provided for secondary users [9].

In this chapter, we first give a brief overview of blockchain. Then, from a systematic view, we give some basic principles illustrating how and why blockchain can be used in DSM, and we also address the costs and challenges of using blockchain. Several instances of blockchain for DSM are then introduced. Finally, a concluding summary of this chapter is given.

#### **5.2 Blockchain Technologies**

Blockchain is essentially an open and distributed database maintained by the nodes in a Peer-to-Peer (P2P) network. When a blockchain is used to record transactions between nodes, it can be seen as a distributed ledger. Through cryptographic techniques, the transactions recorded in a blockchain are tamper-resilient; and by distributing copies of the ledger to all the nodes in the network, a blockchain is robust to single points of failure compared to a centralized ledger. In this section, we will give an overview of the blockchain technology, summarize its features and introduce the smart contract, which is an important application of blockchain.

#### *5.2.1 Overview of Blockchain*

We give an overview of blockchain from the following five aspects: the blockchain structure, the consensus algorithm, the resolution of discrepancies between nodes, the digital signature and the types of blockchain. Finally, we will illustrate the work flow of a blockchain.

*Blockchain Structure*: In a blockchain network, transactions are validated by a community of nodes and then recorded in a *block*. As shown in Fig. 5.1, a block is composed of a header and a body, the latter of which stores the transaction data. The block header contains the hash of the previous block, a timestamp, the Nonce and the Merkle root. The hash value is calculated by passing the header of the previous block through a hash function. With the hash of the previous block stored in the current block, the blockchain grows as new blocks are created and linked to it; moreover, this guarantees that tampering with a previous block will be efficiently detected. The timestamp records the time when a block is created, and the Nonce is used in the creation and verification of a block. The Merkle tree is a binary tree with each leaf node labelled with the hash of one transaction stored in the block body, and each non-leaf node labelled with the hash of the concatenation of its child nodes' hashes. The Merkle root, i.e., the root hash of the Merkle tree, is used to reduce the effort of verifying the transactions in a block. Since a tiny change in one transaction produces a significantly different Merkle root, verification can be completed by simply comparing the Merkle root instead of verifying all the transactions in the block.

**Fig. 5.1** The structure of a blockchain. A block is composed of a header and a body, where the header contains the hash of the previous block, a timestamp, the Nonce and the Merkle root. The Merkle root is the root hash of a Merkle tree which is stored in the block body. We denote a transaction as TX and take the third block, which contains only four transactions, as an example to illustrate the structure of a Merkle tree
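The Merkle-root computation described above can be sketched in a few lines. A Bitcoin-style construction is assumed here: SHA-256 as the hash function, with the last hash duplicated on levels of odd length:

```python
import hashlib

def merkle_root(txs):
    """Compute a (bitcoin-style) Merkle root over a list of raw transactions."""
    level = [hashlib.sha256(tx).digest() for tx in txs]   # leaf hashes
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"TX1", b"TX2", b"TX3", b"TX4"]
root = merkle_root(txs)
# a tiny change in one transaction produces a completely different root
assert merkle_root([b"TX1", b"TX2", b"TX3", b"TX4!"]) != root
```

Comparing the stored root against a recomputed one is thus a constant-size check, regardless of how many transactions the block contains.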

*Consensus Algorithm*: As a distinctive feature, blockchain eliminates the need for a trusted third party to validate transactions. Instead, a *consensus* is reached among all the nodes before a block recording multiple transactions is appended to the blockchain. Essentially, a consensus algorithm regulates the creation of blocks in an unbiased manner to resist malicious attacks. There are different consensus algorithms, such as Proof of Work (PoW), Proof of Stake (PoS) and Practical Byzantine Fault Tolerance (PBFT), adapted to blockchains of different types and to the performance requirements of different applications.

PoW is widely used in blockchain networks such as Bitcoin. With PoW, a new block is created when a random number called a *Nonce* is found. The Nonce can be verified by checking whether the hash of the block header, combined with the Nonce, satisfies certain conditions. Due to the characteristics of the hash function, the Nonce is easy to verify but can only be found by trial and error. Thus, devoting computation resources to finding a valid Nonce can be seen as a form of *work* to create a new block, and the success of finding the Nonce is the *proof* of the work a node has done. The process of creating a new block is thus called *mining*, and a node that participates in mining is called a *miner*. To incentivize the nodes to participate in mining, network tokens and transaction fees are rewarded to the miner that successfully publishes a block.
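The asymmetry between finding and verifying a Nonce can be illustrated with a toy sketch, where the "difficulty" is taken to be the number of leading zero hex digits required of the block hash (a simplification of the real target-based rule used in Bitcoin):

```python
import hashlib

def mine(header: bytes, difficulty: int) -> int:
    """Toy proof-of-work: search for a nonce whose block hash starts with
    `difficulty` zero hex digits. Found only by trial and error."""
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine(b"toy-block-header", 3)
# verification is a single hash evaluation, however long the search took
assert hashlib.sha256(b"toy-block-header"
                      + nonce.to_bytes(8, "big")).hexdigest().startswith("000")
```

Each additional required zero digit multiplies the expected number of trials by 16, which is how the network tunes the block creation rate.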

PoS is another consensus algorithm, with the objective of reducing the intensive computation of the PoW algorithm. PoS was first used in Peercoin, in which the right to publish a new block is still granted by allowing nodes to compete to solve a mathematical problem as in PoW, i.e., to find a valid Nonce. However, the difference lies in the difficulty of solving the problem, which is inversely proportional to the number of tokens a node holds and the time for which it has held them. In particular, with more tokens held for a longer time, the mining difficulty for a node reduces. Furthermore, the problem-solving process is eliminated in later PoS algorithms, where the block creator is elected based on the stakes the nodes hold [10]. With PoS, the computational resources a node commands no longer determine the probability that it successfully creates a new block, and thus the computational resources required to reach a consensus can be largely reduced.

PBFT [11] is a practical voting-based algorithm that allows a consortium of nodes to reach consensus without assuming synchronization among them. With Byzantine Fault Tolerance (BFT), nodes can reach consensus even when some nodes are faulty, i.e., byzantine nodes which may behave arbitrarily. There are two kinds of nodes in the PBFT algorithm: a primary node and backup nodes. One node in the network, acting as a *client*, first issues transactions as a *request* to the primary node; the primary node decides the execution order of the request and then broadcasts it to all the backup nodes. After receiving the request, the backup nodes check its authenticity, decide whether to execute it, and send replies to the client. Consensus on the transaction is reached once the client receives *f* + 1 replies with the same result from different nodes, where *f* denotes the number of byzantine nodes. The PBFT algorithm guarantees safety and liveness, i.e., a request from a client will eventually be replied to, as long as there are fewer than (*n* − 1)/3 byzantine nodes, where *n* denotes the number of nodes participating in the consensus process. PBFT eliminates the heavy computation used in PoW to elect the node that publishes a new block. However, this benefit comes at the cost of requiring a high level of trust between the nodes to resist *sybil attacks* [12], in which a malicious party creates many nodes to bias the consensus toward itself. Thus, the PBFT algorithm is usually used in consortium blockchain networks, e.g., Hyperledger Fabric.
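The two quantities in the description above, the tolerable number of byzantine nodes and the *f* + 1 matching replies the client waits for, can be sketched as follows; the reply values are illustrative.

```python
from collections import Counter

def max_faulty(n: int) -> int:
    """PBFT tolerates f byzantine nodes among n >= 3f + 1 participants."""
    return (n - 1) // 3

def consensus_reached(replies: list, f: int) -> bool:
    """The client accepts a result once f + 1 matching replies arrive,
    since at least one of them must come from an honest node."""
    _, count = Counter(replies).most_common(1)[0]
    return count >= f + 1

f = max_faulty(4)            # a 4-node consortium tolerates 1 faulty node
ok = consensus_reached(["commit", "commit", "abort"], f)
```

With `n = 4` and `f = 1`, two matching `"commit"` replies suffice, even though one node replied differently.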

*Solution to Discrepancy*: Since a blockchain is built upon a distributed network, it takes some time for all nodes to receive a new block, and multiple nodes mine at the same time. The latency of distributing a new block, together with the probability that another block is created during that latency, makes it possible for more than one chain to exist in the network at the same time. In this case, a discrepancy arises among the nodes about which chain is valid: each node must decide which chain to believe by working to extend it with new blocks. The discrepancy is resolved by the longest chain rule, i.e., the longest chain is accepted and the other chains are discarded. The rationale is simple: the longest chain is the one that the majority of nodes trust and work on extending. Over a long time scale, this rule guarantees that only one chain prevails.
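The longest chain rule itself is a one-liner. Note that real clients such as Bitcoin actually compare accumulated proof-of-work difficulty; comparing plain block counts, as below, is a simplification.

```python
def resolve_fork(chains):
    """Longest chain rule: adopt the longest candidate chain;
    the competing shorter chains are discarded."""
    return max(chains, key=len)

# Two competing forks sharing a common prefix (block names are illustrative)
chain_a = ["genesis", "b1", "b2"]
chain_b = ["genesis", "b1", "b2x", "b3x"]   # the fork that grew faster
winner = resolve_fork([chain_a, chain_b])
```

Blocks `b2` on the losing fork become orphaned, which is why applications should wait for a few confirmations before treating a recorded transaction as final.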

*Digital Signature*: To verify the authenticity and integrity of transactions, digital signatures based on asymmetric encryption are used in blockchain networks. Each node in a blockchain network has two keys, a public key and a private key, and content encrypted with the private key can only be decrypted with the public key. Before a node initiates/broadcasts a transaction, it first signs the transaction with its private key. Other nodes in the network can then verify the authenticity of the transaction using the node's public key. With the private key kept confidential to its owner and the public key accessible to all nodes, the authenticity and integrity of transactions can be easily verified. Thus, one cannot masquerade as another node to initiate transactions or forge the contents of initiated transactions.
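As a toy illustration of sign-with-private-key / verify-with-public-key, the sketch below uses textbook RSA with deliberately tiny, insecure parameters (p = 61, q = 53); production blockchains use elliptic-curve signatures such as ECDSA instead.

```python
import hashlib

# Toy textbook-RSA keys (insecure, for illustration only):
# n = 61 * 53, e * d = 1 (mod lcm(60, 52))
n, e, d = 3233, 17, 2753      # public key (n, e), private key d

def sign(message: bytes) -> int:
    """Sign the message digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Anyone holding the public key can check authenticity and integrity."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

tx = b"transfer 5 tokens from A to B"
sig = sign(tx)
```

A valid signature verifies, while any altered signature fails, because raising to the public exponent is a bijection on the residues modulo n.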

*Types of Blockchain*: Based on the rules regulating which nodes can access, verify and validate the transactions initiated by other nodes, blockchains are typically categorized into public, private and consortium blockchains to satisfy the requirements of different applications.


*Work Flow of Blockchain*: In Fig. 5.2, we show the work flow of a blockchain using the PoW consensus algorithm. First, a transaction is initiated and broadcast to the other nodes in the network. The nodes which receive the transaction use its digital signature to verify its authenticity. Once verified, the transaction is appended to each node's list of valid transactions. To record the verified transactions, nodes in the network work to publish a new block, i.e., to find a valid nonce. Once a node finds a valid nonce, it is allowed to publish a block containing the initiated transactions. The other nodes then verify the transactions in the received block by comparing the Merkle root, and once the transactions in the newly published block are proven authentic and untampered, the new block is added to each node's local replica of the blockchain, completing the update.

**Fig. 5.2** The work flow of a blockchain network
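The Merkle-root comparison used in the work flow above can be sketched as follows, duplicating the last hash on odd-sized levels as Bitcoin does; the transaction payloads are illustrative.

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tx_hashes):
    """Pairwise-hash each level upward until a single root remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:               # odd level: duplicate the last hash
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256(t.encode()) for t in ("tx1", "tx2", "tx3")]
root = merkle_root(txs)
```

A node receiving a new block recomputes the root from the listed transactions; any tampered transaction changes the root and the block is rejected.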

#### *5.2.2 Features and the Potential Attacks on Blockchain*

The features of a public blockchain are summarized below.


With honest nodes continuing to create new blocks, manipulation of recorded data is hard to achieve, which makes the blockchain immutable.


Although relatively secure, a blockchain is still at risk of multiple kinds of attacks, such as the selfish mining attack, the majority attack and the Denial of Service (DoS) attack [13].


#### *5.2.3 Smart Contracts Enabled by Blockchain*

Smart contracts, enabled by blockchain technology, are self-executing contracts that require no external enforcement. The contractual clauses between nodes are converted

**Fig. 5.3** The generation and recording of smart contracts

into computer programs in the form of "If-Then" statements. The executable programs are then securely stored in the blockchain. When the predefined conditions in a smart contract are satisfied, its clauses are executed autonomously, and the execution is recorded as an immutable transaction in the blockchain.

The generation procedure of a smart contract is shown in Fig. 5.3, and the work flow of a smart contract is as follows. The involved nodes first negotiate, agree upon and sign the contractual clauses. The approved clauses are then recorded in a transaction. Like other transactions, the transaction which records the smart contract is verified by other nodes and appended to other transactions in a block. Through the consensus algorithm, a block containing the smart contract is added to the blockchain. The smart contract is then allocated a unique address, through which the nodes in the network can access or interact with it. Once a node sends transactions to that address or the conditions in the smart contract are satisfied, the corresponding clauses of the smart contract are strictly executed.
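A smart contract's "If-Then" behaviour can be sketched with a hypothetical minimal class; the condition, action and spectrum-lease clause below are invented for illustration and do not correspond to any real contract platform's API.

```python
class SmartContract:
    """Minimal 'If-Then' contract sketch: once deployed, the clause
    executes automatically whenever its condition is met."""

    def __init__(self, condition, action):
        self.condition = condition
        self.action = action
        self.log = []                      # stands in for immutable on-chain records

    def on_transaction(self, state: dict):
        """Called whenever a transaction reaches the contract's address."""
        if self.condition(state):
            result = self.action(state)
            self.log.append(result)        # execution recorded as a transaction
            return result
        return None

# Clause: if the buyer's deposit covers the price, transfer the spectrum lease.
contract = SmartContract(
    condition=lambda s: s["deposit"] >= s["price"],
    action=lambda s: f"lease band {s['band']} to {s['buyer']}",
)
outcome = contract.on_transaction({"deposit": 10, "price": 8,
                                   "band": "TV-21", "buyer": "SU-1"})
```

The key property mimicked here is that execution is driven purely by the predefined condition, not by either party's goodwill.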

Bitcoin is known as the first cryptocurrency to support a basic form of smart contract, in the sense that the network allows one user to transfer value to another. However, its limited programmability makes it impossible to support smart contracts with complex logic. Ethereum is the first public blockchain-based platform to support advanced smart contracts written in high-level programming languages.

#### **5.3 Blockchain for Spectrum Management: Basic Principles**

In this section, we first discuss the aspects in which the application of blockchain technology can benefit DSM. Note that we mainly consider spectrum management for shared use, and unless otherwise mentioned, all blockchains in this section are public blockchains. We then outline three different ways to deploy a blockchain network over a cognitive radio network. Finally, we discuss the challenges in applying blockchain to DSM.

#### *5.3.1 Blockchain as a Secure Database for Spectrum Management*

Blockchain, essentially an open and distributed database, can be used to record any kind of information in the form of transactions. Spectrum management, in turn, can benefit from the assistance of a database, such as a geo-location database for the protection of incumbent users in TV white spaces [14]. Hence, one potential application of blockchain to spectrum management is to record spectrum-management information (Fig. 5.4).

One main reason for this application is that blockchain makes such information accessible to all secondary users. Such information includes TV white spaces, spectrum auction results, spectrum access history and spectrum sensing results. Here, we discuss the benefits of recording each kind of information for spectrum management.

*Information of TV White Spaces* and other underutilized spectrum bands can be dynamically recorded in a blockchain. In a secure blockchain, information including the interference protection requirements of the primary users and the spectrum usage of TV white spaces with respect to time, frequency and geo-location can be recorded. Compared to a traditional third-party database, a blockchain allows users to directly control the data in it and thus guarantees the accuracy of the data. Another concern of spectrum management is its dynamic nature. With the mobility of secondary users and the variation of the traffic demands of primary users, the availability of spectrum bands may change dynamically. With the decentralization of blockchain, the information on idle spectrum bands can be dynamically recorded by primary users and easily accessed by all unlicensed users. Moreover, by initiating a transaction, SUs can inform others of their departure from or arrival in an area, helping others capture the potential spectrum opportunities where they are located and optimize their transmission strategies. Thus, the efficiency of spectrum utilization can be improved.

*Spectrum Access History* of the unlicensed spectrum bands can be recorded in a blockchain. With existing access protocols such as Carrier Sensing Multiple Access with Collision Avoidance (CSMA/CA) and Listen-Before-Talk (LBT), access does not need to be coordinated. However, the access history can be recorded in the blockchain to achieve fairness among all users. For example, through the autonomous execution of smart contracts, users whose recorded accesses to the unlicensed spectrum bands exceed a frequency threshold can be barred from accessing the same bands for a fixed period.

*Spectrum Auction Results* can also be recorded in a blockchain. Auction mechanisms have been shown to be an efficient way to allocate spectrum dynamically [15]. Among spectrum auctions, *secondary auctions* are used when the licensed primary user (PU) shares the spectrum with secondary users (SUs). Sealed-bid spectrum auctions, where the SUs as bidders privately send their bids to the PU acting as auctioneer, can improve the efficiency of spectrum auctions. Moreover, the second-price sealed-bid auction can guarantee the *truthfulness* of spectrum auctions, meaning that SUs obtain optimal utilities by submitting bids equal to their true valuation of the spectrum bands instead of deceiving the auctioneer. Although sealing the bids is beneficial in these respects, recording the auction results, such as the bids and the *hammer price* at which the SUs and the PU make a deal, after the auction is completed is also important. Blockchain provides a secure and verifiable way to record such information. Specifically, recording spectrum auction results in a blockchain is beneficial in the following respects.
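The second-price (Vickrey) rule described above can be sketched as follows; the bidder names and bid values are illustrative.

```python
def second_price_auction(bids: dict):
    """Sealed-bid second-price auction: the highest bidder wins but pays
    the second-highest bid, which makes truthful bidding optimal."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    hammer_price = ranked[1][1]     # the price actually paid (the 'hammer price')
    return winner, hammer_price

# SUs submit sealed bids for the PU's idle band; winner and hammer price
# would then be recorded in the blockchain as a transaction.
winner, price = second_price_auction({"SU-1": 7, "SU-2": 9, "SU-3": 5})
```

Because the winner's payment does not depend on its own bid, overbidding only risks paying more than one's valuation and underbidding only risks losing a profitable band, so bidding one's true valuation is the dominant strategy.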


*Spectrum Sensing Results* are another kind of information which can be stored in a blockchain. The sensing results stored in the blockchain can be used to map the spectrum usage of the primary networks, providing them with an additional tool for monitoring and maintaining their networks. Moreover, this could potentially encourage more licensed users to allow shared use of spectrum. Even without secondary users submitting sensing reports, a cellular network operator can achieve the above objective by deploying a sensor network to monitor and record the spectrum usage in a blockchain. On the other hand, the sensing results recorded in the blockchain can serve as prior information when SUs choose which licensed spectrum bands to sense and access. In particular, SUs can estimate the utilization rates of different spectrum bands from the historical sensing results and choose the bands with relatively low utilization rates.
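Estimating utilization rates from historical sensing results and choosing the least-utilized band, as described above, can be sketched as follows; the band names and the 0/1 busy observations are made up for illustration.

```python
def estimate_utilization(history: dict) -> dict:
    """Fraction of past sensing results in which each band was busy."""
    return {band: sum(obs) / len(obs) for band, obs in history.items()}

def pick_band(history: dict) -> str:
    """Choose the band with the lowest estimated utilization rate."""
    rates = estimate_utilization(history)
    return min(rates, key=rates.get)

# 1 = band sensed busy, 0 = band sensed idle (hypothetical recorded results)
history = {"band-A": [1, 1, 0, 1],
           "band-B": [0, 1, 0, 0],
           "band-C": [1, 0, 1, 1]}
best = pick_band(history)
```

Here the recorded history serves purely as a prior: the SU still senses the chosen band before accessing, but starts with the band most likely to be idle.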

#### *5.3.2 Self-organized Spectrum Market Supported by Blockchain*

With the tamper-proof record of transactions and the autonomous contract execution and payment settlement enabled by smart contracts, blockchain is a powerful platform on which to construct a self-organized spectrum market providing the following applications.

*Services Implementation*: A smart contract, a self-executing contract built upon a blockchain, can be used in spectrum management, with its clauses autonomously executed and immutably recorded. Moreover, the payment process can also be completed autonomously by smart contracts. Thus, services such as the spectrum sensing service [9] and the trading of transmission capabilities can be securely executed between the users in the blockchain network.

*Identity Management*: Besides executing services with smart contracts, blockchain can also provide an identity management mechanism for the spectrum market. Specifically, a consortium blockchain acting as an intermediary first collects and records the information of service seekers, such as SUs, to complete the registration process. The blockchain can then be used to authenticate the registered users and allow only them to access the data recorded in it. To protect user privacy, the blockchain provides only pseudonymous identities when service providers request user identity authentication. Such a configuration was first proposed in [16], where an Identity and Credibility Service (ICS) is built upon a consortium blockchain.

#### *5.3.3 Deployment of Blockchain over Cognitive Radio Networks*

Blockchain, as a distributed ledger, is maintained by all the nodes in the network. However, maintaining the blockchain can be energy-consuming for a node. For example, in a blockchain using the PoW consensus algorithm, the nodes need to devote computational resources to publishing new blocks. Thus, the deployment of the blockchain network alongside the communication network should be studied. Here, we outline three ways to deploy a blockchain network over a cognitive radio network and analyze their pros and cons.

The first way is to deploy a blockchain network directly over a communication network, as shown in Fig. 5.5. Specifically, since the information regarding spectrum management, which needs to be recorded in the blockchain, is produced or

**Fig. 5.4** Blockchain as a secure database for spectrum management. The information such as spectrum sensing results, spectrum auction results, spectrum access history and the idle spectrum bands information can be securely recorded in blockchain

**Fig. 5.5** Deploy the blockchain network directly on a cognitive radio network

obtained by the nodes in the communication network, i.e., SUs and PUs, it is intuitive for the nodes in the cognitive radio network to also act as nodes in the blockchain network. To deploy the blockchain in this way, the SUs and PUs should be equipped with mining and the other blockchain functions. All the functions of the blockchain, such as the distributed verification of transactions, can then be performed by all the users. However, this kind of deployment requires a control channel through which the users can transmit transactions and blocks. If a wireless control channel is used, there is a risk that it will be jammed by malicious users; once the control channel is paralyzed, the blockchain network cannot function.

**Fig. 5.6** The coexistence of a dedicated blockchain network and a cognitive radio network

Another way is to use a *dedicated blockchain network* to record the relevant information. For users in the cognitive radio network, limited computational capabilities make it difficult to access the spectrum bands and maintain the blockchain at the same time. Specifically, mining, which can consume a lot of energy, is impractical for battery-constrained SUs. To overcome this challenge, one possible way is to allow users to offload the task of recording transactions to a dedicated blockchain network, as shown in Fig. 5.6. In this way, the blockchain functions as an independent database. However, transactions can then no longer be verified directly by the users, and the overhead of transmitting transactions to the dedicated blockchain network increases. Moreover, the nodes lose control over the information recorded in the blockchain. To this end, a more practical way is for the users to offload only the energy-consuming mining task to a cloud/edge computing service provider and to record the transactions in the blockchain themselves. Researchers have designed auction mechanisms to allocate computing resources in this case [17]. However, the offloading of the mining task might lead to malicious competition between the users, which also needs to be considered when a blockchain network is deployed in this way.

Besides the cognitive radio network and the blockchain network, there sometimes exists a third network, e.g., a sensor network, in which sensors are deployed to perform cooperative spectrum sensing to obtain a diversity gain. Under the same principle as the dedicated blockchain, the three networks can coexist and interact with one another. Traditionally, the third network, such as the sensor network, communicates directly with the cognitive radio network; the blockchain network, however, can act as an intermediary between the two.

#### *5.3.4 Challenges of Applying Blockchain to Spectrum Management*

The application of blockchain to spectrum management is promising. However, a few challenges remain with respect to, for example, transaction cost, latency and privacy leakage. Generally, these challenges can be addressed by trading off different characteristics of a blockchain, as shown in Fig. 5.7. As can be seen, the decentralization of blockchain helps guarantee non-repudiation, transparency and immutability, while decreasing privacy and scalability and increasing the latency and transaction cost in the blockchain network. We introduce the challenges and discuss their potential solutions as follows.

*Transaction Cost*: The transaction cost for a node in a blockchain network includes the cost of publishing a new block and the communication overhead of transmitting the transactions initiated by all the nodes. A consensus algorithm through which a new block is published, such as PoW, is too computationally intensive to be sustainable for cognitive devices with limited computational resources and battery. Although users can offload the mining task to a cloud/edge computing provider to save energy, they still have to pay for the computing service. Another solution is to adopt or design a more suitable consensus algorithm to lower the cost of maintaining the blockchain. However, an energy-efficient consensus algorithm usually requires

**Fig. 5.7** The tradeoffs in different characteristics in a blockchain network

a higher level of trust between nodes in the network to guarantee security, which limits the flexibility of the blockchain network.

Another cost of maintaining a blockchain is caused by the transmission of transactions. A transaction initiated by one node in a blockchain network needs to be broadcast to and verified by other nodes before it can be recorded in a new block. After that, the new block also needs to be broadcast to other nodes for verification and storage. When transactions are generated frequently, the overhead of transaction transmission cannot be neglected. Moreover, the transmission of transactions usually requires a control channel, which is at risk of being jammed by malicious nodes; this risk further increases the cost of transaction transmission. To conclude, the tradeoff between cost and benefit should be considered when applying blockchain to spectrum management.

*Latency and Synchronization*: The latency in a blockchain network arises in two phases: the mining process and the updating of the local blockchains of all the users. First, the mining process, e.g., the PoW consensus algorithm, is energy-consuming and time-consuming in order to guarantee the unbiased selection of the node that publishes a new block. Moreover, after a block is published, it still takes some time for the successful miner to broadcast the new block and for all other nodes to verify it and add it to their local replicas of the blockchain. The high latency of the blockchain network might make it unsuitable for applications with stringent latency requirements. For example, the latency of writing the results of a spectrum auction into the blockchain can delay the execution of spectrum access, which impairs the revenues of the secondary users and increases the transmission latency. The latency can also lead to forks of the blockchain, i.e., the coexistence of multiple chains in the network. In a fork, the auction results, such as the recorded winner and its allocated spectrum bands, might differ from those in the original chain. This leads to discrepancies in the spectrum access allocation among the users and may even result in interference between users which simultaneously access the same spectrum bands. Although such a discrepancy is eventually resolved by the longest chain rule, its effect on the secondary users may not be reversible. To conclude, the delayed spectrum access caused by blockchain latency impairs the users' revenue and further discourages them from participating in spectrum auctions.

*Privacy Leakage*: A blockchain guarantees its security by distributing the authority of maintaining the database to all the nodes in the network. As a result, any node can access and verify the data stored in the blockchain. In DSM, the data can include private information collected by users, which might leak their locations or other features. The easy and open access to a public blockchain might therefore deter users from recording any private information in it. Although a private blockchain, in which access is granted only to permissioned nodes, can be employed to improve the privacy of the recorded data, this reduces the decentralization of the blockchain and increases the administrative cost of supervising the nodes in the blockchain network.

*Scalability*: A newly published block needs to be broadcast to all the other nodes and stored in their local replicas. Thus, the number of transactions in a block is limited, since containing too many transactions might cause the block to be *orphaned*, which occurs when it cannot be included in the local blockchain replicas of all the nodes in time. On the other hand, public blockchains usually have a pre-defined *block interval*, which can be adjusted by setting the difficulty of block creation. Reducing the block interval increases the transaction throughput; however, it also increases the risk of blockchain forks and thus reduces security. Consequently, the scalability of a blockchain, defined as its transaction throughput per second, is limited. To address this, a more efficient consensus algorithm can be adopted to decrease the block interval, or a private/consortium blockchain can be used instead of a public one, reducing the number of nodes which need to store a local copy and speeding up the propagation of new blocks to all the nodes.
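The throughput cap described above is simply block capacity divided by block interval; the numbers below are rough, Bitcoin-like illustrative values, not measurements.

```python
def throughput_tps(txs_per_block: int, block_interval_s: float) -> float:
    """Transaction throughput is capped by block capacity over block interval."""
    return txs_per_block / block_interval_s

# Roughly Bitcoin-like values: ~2000 transactions every 600 seconds
tps = throughput_tps(2000, 600)
# Halving the block interval doubles throughput, at the cost of more forks
tps_fast = throughput_tps(2000, 300)
```

This makes the tradeoff explicit: the only levers are block capacity and block interval, and pushing either one stresses propagation and increases the fork risk.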

*Attacks on Blockchain*: The major kinds of attacks on a blockchain were introduced in Sect. 5.2. These attacks can degrade the security or increase the latency of the blockchain, thereby affecting the dynamic spectrum management built upon it. Take selfish mining attacks as an example: in an application which uses a blockchain to record spectrum auction results, selfish mining delays the time for a transaction, i.e., the spectrum auction result, to be successfully recorded in the blockchain, so the allocation of spectrum bands cannot be executed on time. Denial of Service (DoS) attacks can also be launched by jamming the control channel through which the transactions are transmitted.

#### **5.4 Blockchain for Spectrum Management: Examples**

In this section, we give some examples to show how blockchain technologies can be applied to DSM. First, using consensus algorithms, researchers have enhanced the performance of traditional spectrum access schemes and developed new spectrum access protocols. We then introduce spectrum auctions secured by a blockchain. Finally, we introduce a novel cooperative-sensing-based spectrum access protocol, likewise enabled by blockchain technologies.

#### *5.4.1 Consensus-Based Dynamic Spectrum Access*

A consensus algorithm adopted in the mining competition of a blockchain, such as the PoW algorithm, is used to select one node to create a new block in an unbiased and distributed manner. Such algorithms can also be used to manage spectrum access, where the access requests from SUs need to be coordinated to avoid collisions. Furthermore, with the use of Distributed Ledger Technology (DLT), the queue derived by a consensus algorithm can be distributively recorded. Overall, using consensus algorithms, we can either enhance the performance of traditional access protocols or propose new ones. Here, we introduce two instances of consensus-based dynamic spectrum access (DSA).

It is noted that in traditional spectrum auctions, the computational complexity of deriving the optimal bid might be high for cognitive devices with limited computational capabilities, and the time spent deriving it may eat into the time available for transmission. In [18], the authors proposed a puzzle-based auction to improve the efficiency of spectrum auctions. The essence of the puzzle-based auction mechanism is that SUs win the spectrum auction by competing to solve a complex mathematical problem rather than through traditional bidding. Specifically, in a puzzle-based auction, an auctioneer advertises an access opportunity, and interested SUs respond to the auctioneer to obtain a mathematical problem, paying an entry fee. The involved SUs then start working on the problem, and the first SU to submit the correct answer to the auctioneer is granted access to the advertised spectrum band. To guarantee the fairness of the auction, the problem is chosen to be non-parallelizable, e.g., finding the *n*-th digit of π, meaning that it cannot be solved in a parallel manner. Hence, SUs cannot devote more parallel computational resources to obtain a greater chance of winning, which ensures fairness and prevents malicious competition in the spectrum auction. Since the competition to win a puzzle-based auction is similar to the mining process, the auction can be seen as a centralized consensus algorithm in which the verification of the auction winner is performed only by the PU.
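A non-parallelizable puzzle can be sketched with an iterated hash chain, where each step depends on the previous one; this stands in for the π-digit puzzle of [18], and the seed, step count and entrant names are illustrative.

```python
import hashlib

def sequential_puzzle(seed: bytes, steps: int) -> bytes:
    """An inherently sequential puzzle: each hash depends on the previous
    output, so extra parallel hardware cannot shortcut the chain."""
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

def run_auction(entrants, seed: bytes, steps: int):
    """The first SU to submit the correct answer wins the advertised band.
    Here every entrant solves the same chain; submission order is simulated."""
    answer = sequential_puzzle(seed, steps)
    submissions = {su: sequential_puzzle(seed, steps) for su in entrants}
    for su, ans in submissions.items():      # first correct submission wins
        if ans == answer:
            return su
    return None

winner = run_auction(["SU-1", "SU-2"], b"band-advert-7", steps=10_000)
```

Because the chain must be computed one step at a time, solving time depends on a device's sequential speed rather than on how many cores it can throw at the problem, which is the fairness property the auction relies on.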

In [19], with the use of DLT, the authors proposed a distributed DSA protocol called *consensus-before-talk*, in which the access requests of the SUs are stored as transactions and queued according to a consensus distributively reached among all the nodes by a pre-defined rule. The system is shown in Fig. 5.8. In such a system, collisions between SUs are avoided by distributively queuing the access requests from different users, and the transmission latency for SUs can thus be reduced. Specifically, an SU first generates an access request in the form of a transaction and uses the gossip-of-gossip protocol to spread it. An SU which receives the transaction verifies the authenticity of the request through its digital signature, adds its verification time to the transaction, and sends the modified transaction to another SU. After all the SUs have verified the transaction, they spread it again. In the end, each SU has a copy of the transaction with the verification/generation times of all SUs and can calculate a consensus time from these times. The transactions are then added to each SU's local ledger in the order decided by the calculated consensus times. Through these procedures, a consensus regulating the spectrum access queue is distributively reached among all the SUs.
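Assuming, for illustration, that the pre-defined rule aggregates the attached generation/verification times by their median (the actual rule in [19] may differ), the distributed queuing can be sketched as follows; the SU names and timestamps are made up.

```python
from statistics import median

def consensus_time(timestamps):
    """Every SU computes the same consensus time from the generation and
    verification times attached to a request (here: their median)."""
    return median(timestamps)

def order_requests(requests):
    """Queue access requests by consensus time; every SU derives the same
    queue from its local copy of the ledger."""
    return sorted(requests, key=lambda r: consensus_time(r["times"]))

requests = [
    {"su": "SU-2", "times": [5.0, 6.0, 7.0]},
    {"su": "SU-1", "times": [1.0, 2.0, 9.0]},  # one straggling verifier
]
queue = [r["su"] for r in order_requests(requests)]
```

Using an aggregate of many attached timestamps, rather than any single node's clock, is what lets all SUs agree on one queue without synchronized clocks or a central coordinator.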

#### *5.4.2 Secure Spectrum Auctions with Blockchain*

Auction mechanisms have been proven to be an efficient means of dynamic spectrum management. However, the security of spectrum auctions has mainly been guaranteed

**Fig. 5.8** A consensus-based dynamic spectrum access framework

by third-party entities [15]. Moreover, traditional spectrum auctions usually lack validation of the transactions. Although some centralized validation mechanisms, executed by a central authority, have been proposed, such mechanisms are vulnerable to a single point of failure [7]. A distributed validation mechanism, accessible to and verifiable by all the users in the network, is thus desired. Blockchain, as a distributed ledger, can be used to overcome these security challenges in spectrum auctions.

In [7], the authors proposed a blockchain-enabled DSA scheme based on the puzzle-based auction. In such a DSA system, SUs serve both as sensing nodes in the cognitive radio network and as mining nodes in the blockchain network. A PU leases its idle spectrum to the SUs through a blockchain without an auctioneer. Leveraging the blockchain, the spectrum transactions are recorded and verified by the SUs in an immutable and distributed manner. The procedure of the spectrum auction system in [7] is as follows. First, the advertisement of a spectrum opportunity is broadcast by a PU through a control channel, and a puzzle-based auction is used to determine the winner. If the winner has sufficient tokens to cover the pre-defined spectrum payment, access is granted. Otherwise, the auction is restarted, and the malicious bidder, i.e., the SU which took part in the auction with an insufficient budget, is deprived of the right to bid. After the auction is completed, a new transaction recording the auction result needs to be

**Fig. 5.9** Procedures in a blockchain-secured spectrum auction

recorded in the blockchain. The SUs then work to create a new block via mining, and the SU which successfully publishes a block is rewarded with the network's tokens, named *specoins*. With the PoW consensus mechanism, the difficulty of creating a new block increases as the blockchain grows longer. To prevent block creation from becoming impossible for SUs with limited computational capabilities, the blockchain is reset at a fixed frequency. The reset is achieved by creating the first block of a new blockchain, in which the balances of the SUs and the PU are recorded.

Although [7] uses the puzzle-based auction to reduce complexity and thus improve the efficiency of spectrum auctions, it is straightforward to extend this system to other auction mechanisms, since the blockchain is only used to verify and record the result of a spectrum auction. A more general blockchain-secured spectrum auction system is depicted in Fig. 5.9, where we omit the detailed procedures of the spectrum auction.

Enabled by blockchain technologies, the security of spectrum auctions is improved with the following features.


#### *5.4.3 Secure Spectrum Sensing Service with Smart Contracts*

In [9], the authors use smart contracts to autonomously execute a spectrum sensing service. Without the cooperation of PUs, access opportunities can only be obtained through spectrum sensing. However, due to adverse channel fading effects, the sensing result of a single SU might be incorrect. Cooperative sensing by multiple SUs can be used to improve the sensing performance. However, when an SU does not need to access the spectrum, it has no incentive to participate in cooperative sensing, which is energy-consuming. To this end, the authors of [9] proposed to improve the sensing performance by deploying multiple sensing nodes, so-called *helpers*, to provide the SUs with a sensing service, and to use smart contracts to implement the spectrum sensing service. In this way, an SU can offload its sensing task to the sensing helpers, and the sensing helpers can earn revenues by charging the SUs. Specifically, the SU first broadcasts the smart contract, in which a sensing quality requirement is recorded, to the sensing helpers. Each sensing helper then checks whether it satisfies the requirement and decides whether to provide the sensing service. After that, the smart contract selects the helpers and collects the sensing reports from them. Moreover, the algorithms to detect and remove malicious sensing helpers, which report false or random sensing results, can also be executed autonomously by the smart contract. The last procedure in the smart contract is to autonomously pay the service providers. From the above execution process, it is noted that with the use of smart contracts, the sensing service can be implemented and supervised in an autonomous and immutable way, and with the use of a permissionless blockchain, the need for an elaborate registration of sensing helpers is eliminated.
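To make the execution flow concrete, here is a toy, off-chain Python emulation of such a contract; the helper-selection rule, the majority-based malicious-report filter and the flat fee are illustrative assumptions, not the actual algorithms of [9]:

```python
class SensingServiceContract:
    """Toy emulation of the sensing-service smart contract flow.

    The selection rule (accept every helper meeting the quality
    requirement), the malicious-report test (disagreement with the
    majority) and the flat fee are illustrative assumptions.
    """

    def __init__(self, quality_requirement, fee_per_helper):
        self.quality_requirement = quality_requirement
        self.fee = fee_per_helper
        self.reports = {}  # helper id -> binary sensing report

    def select_helpers(self, helpers):
        # Step 2: a helper participates only if it satisfies the requirement.
        return [h for h, q in helpers.items() if q >= self.quality_requirement]

    def submit_report(self, helper, report):
        # Step 3: the contract collects the sensing reports.
        self.reports[helper] = report

    def settle(self):
        # Step 4: drop helpers whose report disagrees with the majority
        # (a stand-in for the malicious-helper detection algorithm),
        # then pay the remaining service providers autonomously.
        majority = round(sum(self.reports.values()) / len(self.reports))
        honest = [h for h, r in self.reports.items() if r == majority]
        return majority, {h: self.fee for h in honest}


contract = SensingServiceContract(quality_requirement=0.9, fee_per_helper=2)
eligible = contract.select_helpers({"h1": 0.95, "h2": 0.80, "h3": 0.92, "h4": 0.99})
for h in eligible:
    contract.submit_report(h, 0 if h == "h3" else 1)  # "h3" reports falsely
decision, payments = contract.settle()
```

In this run "h2" is filtered out at selection time for insufficient quality, and "h3" loses its payment because its report contradicts the majority decision.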

#### *5.4.4 Blockchain-Enabled Cooperative Dynamic Spectrum Access*

Cooperative sensing is used to improve the accuracy of spectrum sensing, which gives the SUs a better chance for opportunistic access. To achieve cooperative sensing, a centralized approach is to deploy a fusion centre to collect and fuse the sensing reports from SUs. Moreover, the fusion centre can analyze the collected sensing


**Fig. 5.10** The sequence of operations in a time slot

reports to detect malicious SUs which submit false or random reports to maximize their own benefits. Although easy to implement, this centralized scheme is vulnerable to a single point of failure. In particular, the cooperative sensing scheme breaks down when the fusion centre is hacked, in which case the whole secondary network is no longer secure. Moreover, there exists a potential type of attack in which an attacker emulates an SU to report false sensing results to the fusion centre. Thus, the authentication of SUs should be verified [20]. A decentralized and secure cooperative sensing scheme is thus desired to address the above concerns. Furthermore, cooperative sensing also needs an incentive mechanism to encourage the SUs to spend the additional energy required for cooperation.

Here, we propose a decentralized cooperative sensing scheme, in which the sensing reports of SUs are spread collaboratively by all the SUs, and once an SU collects all the sensing reports, it derives the final result with its local fusion rule. Moreover, the sensing results are securely recorded as a transaction in the blockchain. To achieve this, each SU acts as both a sensing node and a mining node. However, the energy consumed by sensing and mining might prevent SUs from collaborating. To this end, we propose an effective incentive mechanism which guarantees that the efforts SUs devote to cooperative sensing and mining are proportional to their chances of accessing the cooperatively sensed idle spectrum band. Specifically, the SUs which participate in cooperative sensing and win the mining will be rewarded with tokens in a virtual currency, and the SUs can bid for the access opportunity using the tokens they earn. Note that the virtual currency will be supported and secured by the blockchain. In this sense, by fairly allocating the cooperatively obtained access opportunity, SUs are effectively incentivized to participate in cooperative spectrum sensing and mining, and a DSA framework is thus established.

The proposed DSA framework includes a protocol that specifies a time-slotted, five-phase operation for the SUs to obtain access to the spectrum, as shown in Fig. 5.10. Specifically, each SU first chooses whether to sense the primary channel according to its sensing policy (*Phase I*). Then, the SUs exchange their sensing results through a control channel (*Phase II*). If the fused sensing result shows that the spectrum is idle, each SU decides its bid for the access according to its bidding policy, and the SUs exchange their bids (*Phase III*). Then, each SU decides whether to work on mining according to its mining policy, and the successful miner creates and broadcasts a new block that records the sensing results, the bids of the SUs and the winning bidder (*Phase IV*). Finally, the winning SU accesses the spectrum to transmit its packets (*Phase V*).

The benefits of the proposed cooperative-spectrum-sensing-based DSA framework are as follows:


According to the DSA framework, three policies determine whether an SU finally obtains the spectrum access opportunity. To maximize its token revenue, each SU would need to sense and mine in every time slot. However, it would be a waste of energy for all SUs to always sense and mine. Hence, we propose a set of heuristic policies with which the SUs make the sensing, bidding and mining decisions in a distributed manner.

1. *Sensing Policy ai*(*t*): By a sensing policy, denoted as *ai*(*t*), SU *i* determines whether it should sense the primary channel, with *ai*(*t*) = 1 and *ai*(*t*) = 0 representing sensing and not sensing, respectively. We consider a probabilistic sensing policy by which each SU decides whether to sense in the *t*-th time slot with a fixed probability *Ps*, i.e.,

$$a\_i(t) = \begin{cases} 1, \text{ with probability } P\_s, \\ 0, \text{ with probability } 1 - P\_s. \end{cases} \tag{5.1}$$

2. *Bidding Policy bi*(*t*): By a bidding policy, denoted as *bi*(*t*), SU *i* determines how many tokens it should use to bid for the spectrum access. Denote *ni*(*t*) as the balance of SU *i*'s wallet. Then the bid that an SU can place is limited by the number of tokens it has, i.e., *bi*(*t*) ≤ *ni*(*t*). We consider a bidding policy that is based on the SU's current buffer occupancy ratio and its currently available tokens. Mathematically, the bid of SU *i* in the *t*-th time slot is determined as

$$b\_i(t) = \frac{q\_i(t)}{\mathcal{Q}\_i} n\_i(t),\tag{5.2}$$

where *qi*(*t*) and *Qi* denote the number of packets in the buffer and the buffer size of SU *i*, respectively. Under this bidding policy, an SU dynamically adapts its bid to its current buffer state, which represents the urgency of its access demand. Thus, when its buffer occupancy ratio is high, it submits a high bid to obtain a better chance to access.

3. *Mining Policy ci*(*t*): By a mining policy, denoted as *ci*(*t*), SU *i* determines whether it should work on mining to update the blockchain, with *ci*(*t*) = 1 and *ci*(*t*) = 0 representing mining and not mining, respectively. Similarly to the sensing policy, we consider a probabilistic mining policy according to which each SU randomly decides whether to participate in mining in the *t*-th time slot with a fixed probability *Pm*, i.e.,

$$c\_i(t) = \begin{cases} 1, \text{ with probability } P\_m, \\ 0, \text{ with probability } 1 - P\_m. \end{cases} \tag{5.3}$$
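As a small illustration, the three heuristic policies (5.1)–(5.3) can be sketched in Python as follows (the function names and the stand-alone usage are ours; in the framework these decisions would be made inside the corresponding phases of the time slot):

```python
import random

def sensing_policy(p_s, rng=random):
    """Eq. (5.1): sense the primary channel with a fixed probability P_s."""
    return 1 if rng.random() < p_s else 0

def bidding_policy(q_i, Q_i, n_i):
    """Eq. (5.2): bid in proportion to the buffer occupancy ratio q_i/Q_i,
    so the bid never exceeds the wallet balance n_i."""
    return q_i / Q_i * n_i

def mining_policy(p_m, rng=random):
    """Eq. (5.3): work on mining with a fixed probability P_m."""
    return 1 if rng.random() < p_m else 0

# A full buffer bids the whole balance; a half-full buffer bids half of it.
bidding_policy(q_i=10, Q_i=10, n_i=8)  # → 8.0
bidding_policy(q_i=5, Q_i=10, n_i=8)   # → 4.0
```

Since *qi*(*t*) ≤ *Qi* always holds, the bid returned by `bidding_policy` automatically satisfies the budget constraint *bi*(*t*) ≤ *ni*(*t*).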

#### **5.5 Future Directions**

The application of blockchain technologies to dynamic spectrum access is still in its infancy. As mentioned in the preceding sections, many challenges remain to be addressed. In this section, we give some future directions of work so that the benefits of blockchain technology can be better harvested to support more efficient dynamic spectrum access.


and improve the utilization of spectrum resources, it is thus practical to design different blockchain networks to manage the spectrum resources in different areas. However, a private blockchain network, which needs to verify the identity of nodes and assign permissions to trusted nodes, is not suitable, because mobile users can frequently change their locations and would need to be granted permission each time they enter a new private blockchain network. Thus, how to adopt or design an efficient blockchain that guarantees this flexibility should be considered in future research.


#### **5.6 Summary**

In this chapter, we have investigated the applications of blockchain to dynamic spectrum management. We have first briefly introduced blockchain technologies. We have then given the basic principles which illustrate how and why it is helpful to apply blockchain technologies to dynamic spectrum management, and summarized the remaining challenges. Moreover, we have introduced several instances of blockchain for dynamic spectrum management. Finally, we have discussed future directions.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

# **Chapter 6 Artificial Intelligence for Dynamic Spectrum Management**

**Abstract** In the past decade, a significant advancement has been made in artificial intelligence (AI) research from both theoretical and application perspectives. Researchers have also applied AI techniques, particularly machine learning (ML) algorithms, to DSM, the results of which have shown superior performance as compared to traditional ones. In this chapter, we first provide a brief review on ML techniques. Then we introduce recent applications of ML algorithms to enablers of DSM, which include spectrum sensing, signal classification and dynamic spectrum access.

#### **6.1 Introduction**

Artificial intelligence (AI), also known as machine intelligence, has been seen as the key force driving the development of the future information industry [1]. The term AI was coined by John McCarthy in a workshop at Dartmouth College in 1956, and he defined AI as "the science and engineering of making intelligent machines, especially intelligent computer programs" [2]. Generally, AI is defined as the study of the intelligent agent, which is able to judge and execute actions by observing the surrounding environment so as to complete certain tasks. The intelligent agent can be a system or a computer program. With the significant advancement in the computational capability of computer hardware, various theories, especially machine learning techniques, and applications of AI have been developed in the past two decades.

With the surging demand for wireless services and the increasing number of connected wireless devices, network environments are becoming more and more complex and dynamic, which imposes stringent requirements on DSM. In the age of 5G, AI has been seen as an effective tool to support DSM in order to tackle the transmission challenges, such as high rates, massive connections and low latency [3, 4]. By adopting ML techniques, traditional model-based DSM schemes can be transformed into data-driven DSM schemes, in which the controller in the network adjusts itself adaptively and intelligently to improve the efficiency and robustness of DSM. AI-based DSM schemes have thus attracted more and more attention in recent years, and have shown great potential in practical scenarios.

The applications of AI techniques would bring significant benefits to DSM. Firstly, AI-based DSM schemes normally do not need environmental information as prior knowledge, since they can extract useful features from the surroundings automatically. Secondly, AI-based DSM schemes can be re-trained periodically and are thus more robust to changing environments. Additionally, by applying AI techniques, DSM can be performed in a decentralized and distributed manner, leading to a significant reduction of signaling overhead, especially for large-scale systems. Finally, once trained, AI-based DSM schemes have low complexity for processing newly arrived data and are thus more suitable for practical implementation.

While it is believed that machine learning techniques are effective methods for developing and optimizing the next generation networks [5], there also exist some challenges in applying AI techniques in DSM. For example, different from images, the received signal and its higher order statistics in wireless networks are normally complex numbers, which are hard to process directly by neural networks. Additionally, in a typical wireless communication system, accurate network data such as channel information, is hard to obtain in practice. Hence, there are many remaining challenges and problems to be addressed for achieving wireless intelligence. In this chapter, we first provide a brief review of machine learning techniques, then introduce some applications of these algorithms to DSM, including spectrum sensing, signal classification and dynamic spectrum access.

#### **6.2 Overview of Machine Learning Techniques**

As the core technique of AI, machine learning (ML) is a multidisciplinary subject involving multiple disciplines such as probability theory, statistics, information theory, computational theory, optimization theory, and computer science. T. Mitchell provided a brief definition of machine learning in 1997 as follows: "machine learning is the study of computer algorithms that improve automatically through experience" [6]. Hence, the main objective of ML is to make agents simulate or implement human learning behaviors. For example, with the help of ML algorithms, a machine agent is able to learn from training data to achieve different tasks such as image recognition.

Based on the type of training data used, ML can be divided into two branches, namely, supervised learning and unsupervised learning. The former requires labeled training data, while the latter only uses unlabeled training data.

In supervised learning, the objective for an agent is to learn a parameterized function from the given labeled training dataset and then use the learnt function to predict the result directly when new data arrives. The common tasks in supervised learning are regression and classification. Specifically, regression is to determine the quantitative relationship between certain variables based on a set of training data, and classification is to find a function that determines the category to which the input data belongs.

In unsupervised learning, since the training data is unlabeled, the agent needs to adopt clustering methods to discover the underlying structure. A clustering method aims to divide the training data into several classes based on the similarity of the data. The objective of clustering is to minimize the intra-class distance while maximizing the inter-class distance. Compared to supervised learning, unsupervised learning is more like self-study.

Labeled data can also be generated through online learning such as reinforcement learning (RL). In particular, RL produces labeled experiences to train itself through continuous interactions with the environment. It is developed to solve a *Markov decision process* (MDP) M = {S, A, P, R}, where S is the state space, A is the action space, P is the transition probability space and R is the reward function [7].

ML techniques can also be grouped into two categories, namely, statistical machine learning (SML) and deep learning (DL). Using statistics and optimization theory, SML constructs proper probabilistic and statistical models with training data. DL, on the other hand, makes use of artificial neural network (ANN), also known as deep neural network (DNN), to perform supervised learning tasks. In recent years, neural network techniques have also been applied to RL, leading to the birth of deep reinforcement learning (DRL). In the following, we will provide a brief introduction to SML, DL and DRL.

#### *6.2.1 Statistical Machine Learning*

The objective of SML is to construct a probabilistic and statistical model using the training data, then, based on the constructed model, to make inferences with new data [8]. SML can be applied in both supervised learning and unsupervised learning. The commonly used supervised learning methods with SML are support vector machine (SVM) and K-nearest neighbor (KNN), and the commonly used unsupervised learning methods with SML are K-means and Gaussian mixture model (GMM).

1. **K-Nearest Neighbor**: The K-nearest neighbor (KNN) algorithm is a basic supervised learning algorithm for classification. Let *T* = {(**x**1, *y*1), (**x**2, *y*2), . . . , (**x***N*, *yN*)} denote a given training dataset, where **x***i* is the *i*-th data point and *yi* is the corresponding label. Assume that all data points come from *J* classes. For a newly arrived data point **x**, its label, i.e., class, is determined by its *K* nearest labeled neighbors based on the adopted classification decision rule. Hence, the basic elements in KNN are the number of neighbors *K*, the distance measure and the classification decision rule.

Specifically, the classification process consists of two steps: the first step is to find the *K* labeled data points which are closest to the newly arrived data point **x** according to the given distance measure. Denote the set of these *K* data points as *NK*(**x**). The second step is to determine the label *y* of **x** by applying the chosen classification decision rule to *NK*(**x**). The commonly used classification decision rule is the majority voting rule, which is given as


$$\hat{y} = \arg\max\_{c\_j,\, j=1,\ldots,J} \sum\_{\mathbf{x}\_i \in N\_K(\mathbf{x})} I\left(y\_i = c\_j\right) \tag{6.1}$$

where *I*(·) is the indicator function, i.e., *I*(*yi* = *cj*) = 1 if *yi* = *cj*, and *I*(*yi* = *cj*) = 0 otherwise.
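A minimal Python sketch of this two-step classification, assuming the Euclidean distance measure and the majority voting rule (6.1), might look like:

```python
from collections import Counter
import math

def knn_classify(train, x, k):
    """Classify x by majority vote among its k nearest labeled neighbors.

    train: list of (vector, label) pairs. Euclidean distance is assumed
    as the distance measure, majority voting (6.1) as the decision rule.
    """
    # Step 1: find the K labeled points closest to x.
    neighbors = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    # Step 2: majority voting over the labels in N_K(x).
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [((0.0, 0.0), "idle"), ((0.1, 0.2), "idle"),
         ((1.0, 1.0), "busy"), ((0.9, 1.1), "busy")]
knn_classify(train, (0.2, 0.1), k=3)  # → "idle"
```

With *K* = 3, the three nearest training points to (0.2, 0.1) are two "idle" points and one "busy" point, so majority voting yields "idle".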

2. **Support Vector Machine**: The support vector machine (SVM) algorithm is a typical binary classification algorithm. Its basic idea is to find a decision hyperplane that maximizes the margin between the different classes. Specifically, for a given training dataset *T* = {(**x**1, *y*1), (**x**2, *y*2), . . . , (**x***N*, *yN*)}, where *yi* ∈ {−1, 1}, the objective of the SVM algorithm is to find the hyperplane **w** · **x** + *b* = 0 that separates the data points, where **w** and *b* are the normal vector and the intercept of the hyperplane, respectively. Once the decision hyperplane is obtained, the corresponding classification decision function is given as

$$f(\mathbf{x}) = \text{sign}(\mathbf{w} \cdot \mathbf{x} + b) \tag{6.2}$$

The hyperplane can be learnt by solving the following convex quadratic programming problem

$$\min \quad \frac{1}{2} \|\mathbf{w}\|^2 + C \sum\_{i=1}^{N} \xi\_i \tag{6.3}$$

$$\text{s.t. } y\_i \ (\mathbf{w} \cdot \mathbf{x}\_i + b) \ge 1 - \xi\_i, \ i = 1, 2, \dots, N \tag{6.4}$$

$$\xi\_i \ge 0, \ i = 1, 2, \dots, N \tag{6.5}$$

where *C* is a penalty parameter and ξ*i* is the slack variable for the *i*-th data point. Generally, SVM is used to solve linear classification problems, but it can also be used as a nonlinear classifier by introducing a kernel function such as the Gaussian (radial basis function) kernel.
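The quadratic program (6.3)–(6.5) is normally handled by a dedicated QP solver; purely as an illustration, the equivalent hinge-loss form of the same objective can be minimized by stochastic sub-gradient descent (the data, learning rate and epoch count below are made up):

```python
import random

def train_linear_svm(data, C=1.0, lr=0.01, epochs=200, seed=0):
    """Sub-gradient descent on the hinge-loss form of (6.3)-(6.5):
    minimize 0.5*||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b))."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1:   # point inside the margin: the hinge term is active
                w = [wj - lr * (wj - C * y * xj) for wj, xj in zip(w, x)]
                b += lr * C * y
            else:            # only the regularizer 0.5*||w||^2 contributes
                w = [wj - lr * wj for wj in w]
    return w, b

def svm_predict(w, b, x):
    """Decision function (6.2): f(x) = sign(w.x + b)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

data = [((2.0, 2.0), 1), ((2.5, 1.8), 1), ((-2.0, -2.0), -1), ((-1.8, -2.3), -1)]
w, b = train_linear_svm(list(data))
```

On this linearly separable toy set the learned hyperplane classifies all four training points correctly; a production implementation would instead solve the dual QP or use an off-the-shelf SVM library.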

3. **K-means**: The K-means algorithm is a clustering algorithm, in which the unlabeled data points are processed iteratively to form *K* clusters. At the beginning, *K* data points are chosen to form the initial centroids of the *K* clusters. Then, the K-means algorithm alternates the following two steps. The first step is to assign each of the remaining data points to its nearest cluster, determined by evaluating the Euclidean distance between the data point and the centroid of each cluster and choosing the cluster with the smallest distance. The second step is to update the centroid of each cluster, denoted as *ck*, based on the current assignments. Mathematically, this can be expressed as

$$c\_k = \frac{1}{|N\_k|} \sum\_{\mathbf{x} \in N\_k} \mathbf{x}, \ k = 1, 2, \dots, K \tag{6.6}$$

where *Nk* denotes the set of data points assigned to cluster *k*. These two steps are repeated until a termination condition is met.

Although the K-means algorithm can be implemented with low complexity, its performance is significantly influenced by the initialization parameters, such as the number of clusters and the initial cluster centroids.
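A bare-bones implementation of the two alternating steps, with hand-picked initial centroids, could read:

```python
import math

def kmeans(points, centroids, iters=20):
    """Lloyd's algorithm: alternate assignment and centroid update (6.6).

    points: list of tuples; centroids: the K initial centroids, whose
    choice (like the number of clusters) strongly affects the result.
    """
    for _ in range(iters):
        # Step 1: assign each point to the cluster with the nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            k = min(range(len(centroids)), key=lambda j: math.dist(p, centroids[j]))
            clusters[k].append(p)
        # Step 2: move each centroid to the mean of its assigned points, Eq. (6.6).
        new_centroids = []
        for k, cl in enumerate(clusters):
            if cl:
                new_centroids.append(tuple(sum(c) / len(c) for c in zip(*cl)))
            else:
                new_centroids.append(centroids[k])  # keep an empty cluster's centroid
        centroids = new_centroids
    return centroids, clusters

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centroids, clusters = kmeans(points, centroids=[(0.0, 0.0), (5.0, 5.0)])
```

With these well-separated toy points, the two centroids converge to the means of the two groups after the first iteration.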

4. **Gaussian Mixture Model**: The Gaussian mixture model (GMM) is a widely used model for unsupervised learning. The probability density function (PDF) of the data points can be expressed as

$$p\left(\mathbf{x}\_{i}\right) = \sum\_{k=1}^{K} \pi\_{k} \, p\left(\mathbf{x}\_{i} \, \middle| \, \mu\_{k}, \, \Sigma\_{k}\right) \tag{6.7}$$

where *K* is the number of Gaussian components, π*k* is the mixing coefficient satisfying $\sum\_{k=1}^{K} \pi\_k = 1$, and *p*(**x***i* | *μk*, Σ*k*) denotes the PDF of the *k*-th Gaussian component with mean *μk* and covariance Σ*k*, which can be expressed as

$$p\left(\mathbf{x}\_{i}\,\middle|\,\mu\_{k},\,\Sigma\_{k}\right) = \frac{1}{\pi\,\middle|\,\Sigma\_{k}\,\middle|}\exp\left(-\left(\mathbf{x}\_{i}-\boldsymbol{\mu}\_{k}\right)^{H}\Sigma\_{k}^{-1}\left(\mathbf{x}\_{i}-\boldsymbol{\mu}\_{k}\right)\right) \qquad (6.8)$$

The unknown parameters of the GMM can be denoted as Θ = {π*k*, *μk*, Σ*k*}, *k* = 1, . . . , *K*. The objective of the GMM algorithm is to find the optimal parameters Θ∗ that maximize the following log-likelihood function

$$\mathcal{L}\left(\Theta\right) = \sum\_{i=1}^{N} \ln \left( \sum\_{k=1}^{K} \pi\_k \, p\left(\mathbf{x}\_i \mid \boldsymbol{\mu}\_k, \Sigma\_k\right) \right) \tag{6.9}$$

Since there is no closed-form solution to the above problem, the expectation maximization (EM) algorithm is usually adopted to solve for the optimal parameters Θ∗ in an iterative manner with properly chosen initial values. Each iteration of the EM algorithm is composed of two steps, namely, the expectation step and the maximization step. Denote γ*ik* as a latent variable which represents the probability that example **x***i* belongs to the *k*-th cluster. In the expectation step, the latent variable γ*ik* is updated as

$$\gamma\_{ik} = \frac{\pi\_k \, p\left(\mathbf{x}\_i \mid \boldsymbol{\mu}\_k, \Sigma\_k\right)}{\sum\_{k'=1}^{K} \pi\_{k'} \, p\left(\mathbf{x}\_i \mid \boldsymbol{\mu}\_{k'}, \Sigma\_{k'}\right)}, \quad i = 1, \ldots, N, \ k = 1, \ldots, K.$$

In the maximization step, the parameters are updated as

$$\pi\_k = \frac{1}{N} \sum\_{i=1}^{N} \gamma\_{ik}, \quad \boldsymbol{\mu}\_k = \frac{\sum\_{i=1}^{N} \gamma\_{ik} \, \mathbf{x}\_i}{\sum\_{i=1}^{N} \gamma\_{ik}}, \quad \Sigma\_k = \frac{\sum\_{i=1}^{N} \gamma\_{ik} \left(\mathbf{x}\_i - \boldsymbol{\mu}\_k\right) \left(\mathbf{x}\_i - \boldsymbol{\mu}\_k\right)^H}{\sum\_{i=1}^{N} \gamma\_{ik}}, \quad k = 1, \ldots, K.$$
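For illustration, the EM updates can be coded for the simpler real-valued one-dimensional case (the complex multivariate model in (6.8) follows the same E/M pattern, with vector means and covariance matrices in place of scalars):

```python
import math

def em_gmm_1d(xs, pis, mus, vars_, iters=50):
    """EM for a 1-D real-valued GMM; a toy stand-in for the general case."""
    def pdf(x, mu, var):
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    n, K = len(xs), len(pis)
    for _ in range(iters):
        # E-step: responsibilities gamma_ik, the posterior membership probabilities.
        gamma = []
        for x in xs:
            w = [pis[k] * pdf(x, mus[k], vars_[k]) for k in range(K)]
            s = sum(w)
            gamma.append([wk / s for wk in w])
        # M-step: re-estimate mixing coefficients, means and variances.
        for k in range(K):
            nk = sum(g[k] for g in gamma)
            pis[k] = nk / n
            mus[k] = sum(g[k] * x for g, x in zip(gamma, xs)) / nk
            vars_[k] = sum(g[k] * (x - mus[k]) ** 2 for g, x in zip(gamma, xs)) / nk
    return pis, mus, vars_

# Two well-separated groups around 0 and 5; initial means 0.0 and 4.0.
xs = [-0.2, 0.0, 0.3, 0.1, 4.8, 5.0, 5.3, 5.1]
pis, mus, vars_ = em_gmm_1d(xs, [0.5, 0.5], [0.0, 4.0], [1.0, 1.0])
```

With this separation the responsibilities quickly become nearly hard assignments, so the estimated means converge to the two group means (about 0.05 and 5.05) and the mixing coefficients to 0.5 each.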

#### *6.2.2 Deep Learning*

Deep learning (DL) has significantly advanced the development of computer vision (CV) and natural language processing (NLP) in recent years. As the core technique of DL, the ANN is used to approximate the relationship between an input and an output. Generally, a typical ANN is composed of three parts, namely, the input layer, the output layer and the hidden layers, as shown in Fig. 6.1. Each layer contains many cells with different activation functions, and the cells in adjacent layers are connected in a pre-designed manner. With the development of ANNs, different network structures have been designed for different types of data. For example, a convolutional neural network (CNN), which consists of convolutional layers, pooling layers and fully connected layers, is suitable for images, while a recurrent neural network (RNN), which contains many recurrent cells in the hidden layers, is suitable for time series data. Furthermore, in order to improve the generalization and convergence performance of DL, dropout and other techniques are introduced in the design of neural networks [9].

1. **Convolutional Neural Network**: Convolutional neural network (CNN) is a special network for processing images, in which the cells adopt convolution operations. A typical CNN is composed of multiple convolutional layers, pooling layers and fully-connected layers [10].


In order to utilize the extracted feature maps, fully-connected layers are normally used as the last several layers of a CNN. With the help of the special structure, CNNs can process data with clear mesh topology effectively.

2. **Recurrent Neural Network**: The recurrent neural network (RNN) is a powerful tool for time series data, and has shown superior performance on speech recognition [11]. Different from traditional neural networks, an RNN contains many connected cells in each layer. All cells in the same layer have the same structure, and each of them passes its information to its successor. The output of an RNN is determined not only by its current input but also by the memory recorded in the past time steps. However, conventional RNNs cannot learn long-term dependencies and easily suffer from the vanishing gradient problem. The long short-term memory (LSTM) network, a kind of gated RNN, has been proposed to mitigate this problem. Specifically, in each cell of an LSTM network, there are three gates, namely, the input gate, the forget gate and the output gate, which are given as follows

$$\begin{aligned} \mathbf{i}\_{t} &= \sigma \left( \mathbf{W}\_{i} \mathbf{h}\_{t-1} + \mathbf{U}\_{i} \mathbf{x}\_{t} + b\_{i} \right) \\ \mathbf{f}\_{t} &= \sigma \left( \mathbf{W}\_{f} \mathbf{h}\_{t-1} + \mathbf{U}\_{f} \mathbf{x}\_{t} + b\_{f} \right) \\ \mathbf{o}\_{t} &= \sigma \left( \mathbf{W}\_{o} \mathbf{h}\_{t-1} + \mathbf{U}\_{o} \mathbf{x}\_{t} + b\_{o} \right) \end{aligned} \tag{6.10}$$

where **i***t*, **f***t*, and **o***t* are the input gate, the forget gate and the output gate, respectively; **W***i*, **U***i*, *bi*, **W***f*, **U***f*, *bf*, **W***o*, **U***o*, *bo* are the weight matrices and biases of the corresponding gates; and σ(·) is the sigmoid function. Additionally, each cell has a self-loop, and its cell state is jointly controlled by the forget gate and the input gate. Specifically, the forget gate determines what information to remove from, and the input gate what information to add to, the cell state. Mathematically, the cell state can be expressed as

$$\mathbf{c}\_{t} = \mathbf{f}\_{t} \cdot \mathbf{c}\_{t-1} + \mathbf{i}\_{t} \cdot \tanh(\mathbf{W}\_{c}\mathbf{h}\_{t-1} + \mathbf{U}\_{c}\mathbf{x}\_{t} + b\_{c}) \tag{6.11}$$

where **W***c*, **U***<sup>c</sup>* and *bc* are the weight matrices and the bias of the cell memory, respectively. The gated structure allows the LSTM network to learn the long-term dependent information while avoiding vanishing gradients.
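A single scalar LSTM step implementing (6.10)–(6.11) can be sketched as follows (scalars stand in for the weight matrices purely for readability, and the gate parameters below are chosen to demonstrate the forget gate carrying the cell state through):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x, h_prev, c_prev, params):
    """One scalar LSTM step following Eqs. (6.10)-(6.11).

    In practice W*, U* are matrices and the products are matrix-vector
    multiplications; scalars are used here to keep the step readable.
    """
    Wi, Ui, bi, Wf, Uf, bf, Wo, Uo, bo, Wc, Uc, bc = params
    i_t = sigmoid(Wi * h_prev + Ui * x + bi)   # input gate (6.10)
    f_t = sigmoid(Wf * h_prev + Uf * x + bf)   # forget gate (6.10)
    o_t = sigmoid(Wo * h_prev + Uo * x + bo)   # output gate (6.10)
    # Eq. (6.11): forget part of the old state, add gated new information.
    c_t = f_t * c_prev + i_t * math.tanh(Wc * h_prev + Uc * x + bc)
    h_t = o_t * math.tanh(c_t)                 # hidden state for the next cell
    return h_t, c_t

# Forget gate saturated open (bf = 20) and input gate shut (bi = -20):
# the cell state should pass through almost unchanged.
params = (0, 0, -20, 0, 0, 20, 0, 0, 0, 0, 0, 0)
h, c = lstm_cell_step(x=1.0, h_prev=0.0, c_prev=0.7, params=params)
```

This toy configuration illustrates why the gated structure avoids vanishing gradients: when the forget gate is open, the cell state (and hence its gradient) is carried across time steps nearly intact.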

#### *6.2.3 Deep Reinforcement Learning*

As the combination of DL and RL, deep reinforcement learning (DRL) has shown superior performance in sequential decision-making tasks. In the DRL framework, as shown in Fig. 6.2, the agent inputs its observation (state) *s*(*t*) ∈ S into the neural network, which outputs an action *a*(*t*) ∈ A. By executing the action, the agent obtains a reward R(*s*(*t*), *a*(*t*)), which is used to evaluate the profit of the selected action. After a period of learning, the agent can learn the optimal strategy, which maps a state to an action, so as to maximize its long-term accumulated reward from continuous interactions with the environment. Similar to RL, the basic elements of DRL are the state space S, the action space A and the reward function R. Different from traditional RL, which uses a table to represent the relationship between the state space and the action space, DRL uses a neural network as the function approximator, and therefore it works more effectively for problems with high-dimensional state and action spaces. The commonly used DRL methods are deep Q-network (DQN), double deep Q-network (DDQN), asynchronous advantage actor-critic (A3C) and deep deterministic policy gradient (DDPG).

1. **Deep Q-network**: Different from the tabular method in traditional RL, a neural network called the deep Q-network (DQN) is adopted to approximate the relationship between the state space and the action space [12]. Since the DQN is optimized by minimizing the temporal difference error, the loss function of the DQN is given as

$$L\left(\theta\right) = \mathbb{E}\left[\left(\mathbf{y}^{DQN} - \mathcal{Q}\left(\mathbf{s}, a; \theta\right)\right)^2\right] \tag{6.12}$$

where E[·] denotes the expectation operation, *Q*(*s*, *a*; θ) is the Q-function with parameter θ, and the target value *y*<sup>DQN</sup> is given as

$$\mathbf{y}^{DQN} = \mathcal{R}\left(\mathbf{s}, a\right) + \gamma \max\_{a' \in \mathcal{A}} \mathcal{Q}\left(\mathbf{s}', a'; \theta'\right) \tag{6.13}$$

where γ is the discount factor and θ' denotes the parameter of the target network.

To improve the performance of the basic DQN, two additional techniques, i.e., experience replay and a quasi-static target network, are introduced in the design of the DQN.


Additionally, in order to balance exploration and exploitation, the ε-greedy algorithm is usually adopted in DRL. Specifically, the agent selects the action corresponding to the maximum Q-value of the trained network with probability 1 − ε, and selects an action randomly otherwise. After the algorithm converges, the agent simply selects the action with the maximum Q-value, and the target network is no longer needed.
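A minimal sketch of the ε-greedy rule described above (the list of Q-values is a stand-in for the network's output for the current state):

```python
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    """Explore with probability epsilon, otherwise exploit the max-Q action."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                         # exploration
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploitation

epsilon_greedy([0.1, 0.9, 0.3], epsilon=0.0)  # → 1 (pure exploitation)
```

In practice ε is usually annealed from a value near 1 toward a small constant as training progresses, shifting the agent from exploration to exploitation.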

2. **Double Deep Q-network**: Since the target value is derived from the same DQN, the Q-function may be overestimated and trapped in a local optimum, leading to performance degradation. To improve the performance of the DQN, the double deep Q-network (DDQN) can be adopted to provide a more accurate estimation of the Q-function [13]. In the DDQN, the target value *y*<sup>DDQN</sup> can be expressed as follows

$$\mathbf{y}^{DDQN} = \mathcal{R}\left(\mathbf{s}, a\right) + \gamma \, \mathcal{Q}\left(\mathbf{s}', \underset{a' \in \mathcal{A}}{\arg\max} \, \mathcal{Q}\left(\mathbf{s}', a'; \theta\right); \theta'\right) \tag{6.14}$$
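The difference between the two targets (6.13) and (6.14) can be seen in a few lines (the Q-values and discount factor below are made up; note how DDQN lets the online network select the action while the target network evaluates it):

```python
GAMMA = 0.9  # made-up discount factor

def dqn_target(reward, next_q_target):
    """Eq. (6.13): bootstrap with the max of the target network's Q-values."""
    return reward + GAMMA * max(next_q_target)

def ddqn_target(reward, next_q_online, next_q_target):
    """Eq. (6.14): the online network (theta) selects the action,
    the target network (theta') evaluates the selected action."""
    a_star = max(range(len(next_q_online)), key=next_q_online.__getitem__)
    return reward + GAMMA * next_q_target[a_star]

# The online net overrates action 0; DDQN's decoupling damps that bias.
online, target = [5.0, 2.0], [1.0, 3.0]
dqn_target(1.0, target)           # 1 + 0.9 * 3.0
ddqn_target(1.0, online, target)  # 1 + 0.9 * 1.0
```

Decoupling action selection from action evaluation is exactly what reduces the overestimation bias of the single-network target.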

After years of development, ML has become one of the most prominent disciplines of the information age and has shown strong effectiveness in applications. As the main force of AI techniques, more and more ML algorithms are being applied in various fields to achieve industrial intelligence.

#### **6.3 Machine Learning for Spectrum Sensing**

Spectrum sensing is an important task for realizing DSM in wireless communication systems, and is usually used to help users find out the channel status. To increase the accuracy of spectrum sensing, many spectrum sensing algorithms have been developed over the past years, such as the estimator-correlator (EC) detector, the semi-blind energy detector and the blindly combined energy detection (BCED). Although the EC detector can achieve the optimal performance, it needs knowledge of the PU signals and the noise level. The semi-blind energy detector is more practical, as it only requires knowledge of the noise power. However, its performance depends heavily on accurate knowledge of the noise power, which is usually uncertain. The BCED does not need any prior knowledge about the PU signals or noise, but its performance is worse than that of the semi-blind energy detector. Note that most existing algorithms are model-driven and need prior knowledge of the noise or PU signals to achieve good performance. This makes them unsuitable for practical environments, where the lack of prior knowledge results in performance degradation.

To solve the above issues, machine learning techniques have been adopted to develop cooperative spectrum sensing (CSS) framework [14]. Specifically, the work considers a CR network, in which multiple SUs share a frequency channel with multiple PUs. The channel is considered to be unavailable for SUs to access if at least one PU is active and it is available if there is no active PU. For cooperative sensing, each SU estimates the energy level of the received signals and reports it to another SU who acts as a fusion center. After the reports of the energy level from all SUs are collected, the fusion center makes the final classification of the channel availability.

Using machine learning techniques such as the K-means algorithm, GMM clustering, the SVM algorithm and the KNN algorithm, the fusion center can construct a classifier to detect the channel availability. With unsupervised machine learning such as K-means and GMM clustering, the detection of the channel availability relies on the cluster to which the sensing reports from all the SUs are mapped. On the other hand, with supervised machine learning such as the SVM and KNN algorithms, the classifier is first trained using labeled sensing reports from all SUs. After the classifier is trained, it can be used directly to determine the channel availability. Compared with traditional CSS techniques, the proposed machine learning framework brings the following two advantages: (1) it is robust to changes in the radio environment; (2) it achieves a better performance in terms of classification accuracy.
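The unsupervised case can be sketched in a few lines: cluster the stacked energy reports with plain K-means and declare the low-energy cluster "channel available". The two-SU data, the energy levels and the helper names below are illustrative assumptions, not the setup of [14].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical energy reports from two cooperating SUs:
# low energies when no PU is active, high energies when a PU transmits.
idle = rng.normal(1.0, 0.2, size=(100, 2))
busy = rng.normal(3.0, 0.4, size=(100, 2))
reports = np.vstack([idle, busy])

def kmeans(x, k=2, iters=50):
    # Plain K-means on the stacked sensing reports
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(x[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

centroids = kmeans(reports)
available = int(np.argmin(centroids.sum(axis=1)))  # low-energy cluster => available

def channel_available(report):
    # Fusion-center decision: map a new report to its nearest cluster
    return int(np.argmin(np.linalg.norm(centroids - report, axis=1))) == available
```

Once the centroids are learned, classifying a new report costs only one nearest-centroid lookup, which is what makes the fusion-center decision cheap at run time.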

#### **6.4 Machine Learning for Signal Classification**

Signal classification, usually performed before signal detection, is a fundamental task in cognitive radio networks. Consider modulation classification as an example. Traditionally, there are two kinds of modulation classification approaches, namely, the likelihood-based (LB) approach and the feature-based (FB) approach. The LB approach computes the likelihood function of the received signals under different modulation scheme hypotheses, and the modulation scheme with the maximum likelihood value is selected. With perfect knowledge of the channel and noise parameters, the LB approach can achieve the optimal performance in a Bayesian sense. However, the estimation of these parameters imposes high computational complexity. In the FB approach, useful features such as higher-order statistics are extracted for decision-making. In general, the FB approach has lower computational complexity but can only achieve sub-optimal performance. Therefore, in order to achieve near-optimal performance with low computational complexity, ML techniques have been introduced to solve the modulation classification problem, and have recently shown superior performance.

#### *6.4.1 Modulation-Constrained Clustering Approach*

In [15], a clustering-based LB classifier is proposed for modulation classification in multiple-input and multiple-output (MIMO) communication systems. In that work, a spatial-multiplexed MIMO system with *Nt* transmit antennas and *Nr* receive antennas is considered, in which data symbols are transmitted independently from each transmit antenna. The signal model of the *n*-th received signal vector **y** (*n*) is given as

$$\mathbf{y}\left(n\right) = \mathbf{H}\mathbf{s}\left(n\right) + \mathbf{u}\left(n\right), \quad n = 1, \ldots, N\tag{6.15}$$

where $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is the channel matrix, which remains constant within each block of $N$ symbols, and $\mathbf{u}(n)$ denotes the AWGN vector.

For LB classifiers, the classification decision is made by selecting the modulation scheme with the maximum likelihood

$$\hat{M} = \underset{M \in \mathcal{M}}{\arg\max}\, \mathcal{L}_M \tag{6.16}$$

where $\mathcal{L}_M$ is the likelihood function corresponding to the modulation scheme $M$ and $\mathcal{M}$ is the set of candidate modulation schemes.

Since the noise at the receiver is Gaussian, the PDF of the received signals follows the GMM given in (6.7), where $K = Q^{N_t}$, and the mean and covariance matrix of the $k$-th Gaussian component are given as $\boldsymbol{\mu}_k = \mathbf{H}\mathbf{s}_k$ and $\boldsymbol{\Sigma}_k = \sigma^2\mathbf{I}$, respectively, with $\mathbf{s}_k$ denoting the $k$-th candidate symbol vector. The likelihood function for each modulation scheme can be calculated by estimating the parameters of the GMM using the EM algorithm introduced in Sect. 6.2.1. However, the direct application of the EM algorithm presents the following challenges. Firstly, the modulation order $Q$ of a modulation scheme determines the number of Gaussian components as well as the number of parameters to be estimated in the GMM. Thus, the computational complexity of calculating the likelihood function of a higher-order modulation scheme can be extremely high. Secondly, the initialization of the parameter set is an important part of the EM algorithm, and it significantly influences the converged performance and the convergence speed. Hence, in order to improve the performance of the EM algorithm for modulation classification, an EM algorithm with fewer parameters to be estimated and a good initialization method is needed.

To reduce the number of parameters, a centroid reconstruction method is proposed in [15] by exploiting the relationship among the constellation points. With the help of the proposed centroid reconstruction method, the number of parameters to be estimated is reduced from $Q^{N_t}$ to $N_t$ only. This also reduces the number of signal samples needed for the estimation. Specifically, for multiple-input and single-output (MISO) channels, the cluster centroids $\boldsymbol{\mu} = [\mu_1, \mu_2, \ldots, \mu_K]$ can be reconstructed as follows

$$\boldsymbol{\mu} = \boldsymbol{\Psi} \mathbf{A} \tag{6.17}$$

where $\mathbf{A} = [\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_{N_t}]^T$ is the reconstructive coefficient matrix, which is a known constant matrix for each modulation scheme, and $\boldsymbol{\Psi} = [r_1, \ldots, r_{N_t}]$ is the corresponding reconstructive parameter vector.

By introducing constellation-structure-based centroid reconstruction into the EM algorithm, the iteration over $\{\mu_k\}_{k=1}^{K}$ can be replaced by the iteration over $r_1, r_2, \ldots, r_{N_t}$. If we denote $\Phi = \{r_1, r_2, \ldots, r_{N_t}, \sigma^2\mathbf{I}\}$ as the set of unknown parameters, the likelihood function is given as

$$\mathcal{L}\left(\Phi\right) = \sum_{n=1}^{N} \ln \left( \sum_{k=1}^{K} \frac{1}{K}\, p\left(\mathbf{y}\left(n\right) \mid \mu_k, \sigma^2\right) \right) \tag{6.18}$$

Hence, the proposed EM algorithm for modulation classification proceeds as follows.

1. **Initialization**: Initialize the set of unknown parameters $\Phi$.

2. **E-step**: Compute the responsibility of the $k$-th Gaussian component for the $n$-th sample
$$\gamma_{nk} = \frac{\pi_k\, p\left(\mathbf{y}\left(n\right) \mid \mu_k, \sigma^2\right)}{\sum_{k'=1}^{K} \pi_{k'}\, p\left(\mathbf{y}\left(n\right) \mid \mu_{k'}, \sigma^2\right)} \tag{6.19}$$

and then the reconstructive parameters

$$r_1 = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk}\, a_{1,k} \left(\mathbf{y}\left(n\right) - a_{2,k} r_2 - \dots - a_{N_t,k} r_{N_t}\right)}{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk} \left(a_{1,k}\right)^2} \tag{6.20}$$

$$r_m = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk}\, a_{m,k} \left( \mathbf{y}\left( n \right) - \sum_{m' \neq m} a_{m',k} r_{m'} \right)}{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk} \left( a_{m,k} \right)^2} \tag{6.21}$$

where *m* = 2,..., *Nt* .

3. **M-step**: The cluster centroids $\boldsymbol{\mu}$ and the noise variance $\sigma^2$ are updated iteratively as below

$$\mu_k = a_{1,k} r_1 + a_{2,k} r_2 + \dots + a_{N_t,k} r_{N_t}, \quad k = 1, \dots, K \tag{6.22}$$

and

$$\sigma^2 = \frac{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk} \left(\mathbf{y}(n) - \mu_k\right) \left(\mathbf{y}(n) - \mu_k\right)^H}{\sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk}} \tag{6.23}$$


4. **Classification Decision**: Repeat Step 2 and Step 3 iteratively until the likelihood function converges. Then make the classification decision according to the criterion defined in (6.16).

Simulation results in [15] show that the proposed algorithm performs well with short observation length in terms of classification accuracy. Additionally, the performance achieved by the proposed algorithm is close to that of the average likelihood ratio-test upper bound (ALRT-UB), which can be seen as the performance upper bound of any modulation classification algorithm.
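The EM recipe above can be sketched for the simplest case of a single transmit antenna ($N_t = 1$), where the centroids collapse to $\mu_k = a_k r$ and a single reconstructive parameter $r$ plays the role of the channel gain. The real-valued constellations, the initialization and the helper names are our own assumptions for illustration, not the MIMO classifier of [15].

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical unit-power real constellations for the N_t = 1 sketch
CONSTELLATIONS = {
    "BPSK": np.array([-1.0, 1.0]),
    "4PAM": np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0),
}

def em_log_likelihood(y, a, iters=30):
    # EM with centroids reconstructed as mu_k = a_k * r; returns the final log-likelihood
    K, N = len(a), len(y)
    r = np.abs(y).mean()          # crude initialization of the reconstructive parameter
    var = y.var() + 1e-3
    for _ in range(iters):
        mu = a * r                # centroid reconstruction
        # E-step: responsibilities under equal priors 1/K
        logp = -(y[:, None] - mu[None, :]) ** 2 / (2.0 * var)
        g = np.exp(logp - logp.max(axis=1, keepdims=True))
        g /= g.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of r and the noise variance
        r = (g * a[None, :] * y[:, None]).sum() / (g * a[None, :] ** 2).sum()
        var = ((g * (y[:, None] - a[None, :] * r) ** 2).sum() / N) + 1e-6
    mu = a * r
    dens = np.exp(-(y[:, None] - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
    return np.log(dens.mean(axis=1) + 1e-300).sum()

def classify(y):
    # Decision rule: pick the scheme with the largest converged likelihood
    return max(CONSTELLATIONS, key=lambda m: em_log_likelihood(y, CONSTELLATIONS[m]))

# Example: BPSK symbols through a channel gain h = 0.8 with AWGN
s = rng.choice(CONSTELLATIONS["BPSK"], size=400)
y = 0.8 * s + rng.normal(0.0, 0.1, size=400)
```

Because only $r$ and $\sigma^2$ are iterated, each EM pass is cheap regardless of the constellation size, which is the point of the centroid reconstruction.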

#### *6.4.2 Deep Learning Approach*

The modulation classifier in Sect. 6.4.1 requires accurate knowledge of the channel model, which may not be available in practice. As a powerful supervised learning framework, DL can also be applied to modulation classification. In [16], a low-complexity blind data-driven modulation classifier based on a DNN is proposed, which operates under uncertain noise conditions modeled by a mixture of white Gaussian noise, white non-Gaussian noise and time-correlated non-Gaussian noise.

In [16], a single-input and single-output (SISO) channel is considered, and the *n*-th received signal sample is given as

$$r(n) = h s\left(n\right) + u\left(n\right), \quad n = 1, 2, \dots, N \tag{6.24}$$

where *s* (*n*) is the transmitted symbol from an unknown modulation scheme *Mi* , *N* is the number of symbols in a block, *h* is the channel coefficient and *u* (*n*) denotes the additive noise.

Denote the set of candidate modulation schemes and the received signal sequence by $\mathcal{M} = \{M_i, i = 1, 2, \ldots, L\}$ and $\mathbf{r} = [r(1), r(2), \ldots, r(N)]$, respectively. Let $P(M_i|\mathbf{r})$ denote the *a posteriori* probability of the modulation scheme $M_i$ given the received signal $\mathbf{r}$. The objective of the work is to find the modulation scheme that maximizes the *a posteriori* probability, which is known as the maximum *a posteriori* (MAP) criterion:

$$\hat{M}_i = \underset{M_i \in \mathcal{M}}{\arg\max}\, P(M_i|\mathbf{r}) \tag{6.25}$$

In order to make accurate classification decisions with low complexity, a DNN is adopted to learn the *a posteriori* probabilities $P(M_i|\mathbf{r})$, $i = 1, \ldots, L$. The DNN is used as an approximation of the function $f$ mapping the received signal to the *a posteriori* probabilities. The in-phase and quadrature (IQ) components of the received signal samples are chosen as the inputs to the proposed neural network. Motivated by its superior performance in processing time-dependent data, the long short-term memory (LSTM) network is introduced in the design of the proposed neural network, which makes it well suited to the modulation classification problem.


Additionally, in order to summarize the output from the LSTM network, a temporal attention mechanism is applied in the final LSTM layer over the outputs from all time steps. In the temporal attention mechanism, each output has a different weight, which indicates the importance of that time step to the modulation classification result.
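The temporal attention step amounts to a softmax-weighted average over time. The scoring form below (a learned vector dotted with each LSTM output) is a common choice and an assumption of this sketch, since the exact parameterization of [16] is not specified here.

```python
import numpy as np

def temporal_attention(h, w):
    # h: (T, D) outputs of the final LSTM layer over T time steps
    # w: (D,) learned scoring vector
    scores = h @ w                            # one relevance score per time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # softmax over the time axis
    return alpha @ h, alpha                   # weighted summary and the weights
```

The weights sum to one, so the summary stays on the same scale as the individual LSTM outputs while emphasizing the most informative time steps.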

Specifically, the proposed seven-layer neural network is composed of three stacked LSTM layers and four fully-connected layers. In the training phase, the one-hot coding vectors of the true modulation schemes of the input signal samples are used as the labels. The Adaptive Moment Estimation (Adam) optimizer is used to minimize the loss function and thereby optimize the weights and biases of the network. After the training phase, the modulation classification decision is made according to the MAP criterion defined in (6.25). The simulation results show that the classification accuracy of the proposed classifier approaches that of the ML classifier with all channel and noise parameters known. Moreover, under uncertain noise conditions, the proposed classifier achieves a better performance than the EM and ECM classifiers with lower online computational complexity.

#### **6.5 Deep Reinforcement Learning for Dynamic Spectrum Access**

In the traditional DSA mechanism, there exists a centralized control node responsible for allocating the spectrum resources to users. Before making the access decisions, the centralized node needs to collect global network information, such as the positions of users and base stations as well as the channel state information. However, such global network information is difficult to obtain in practice, as collecting it imposes significant signaling overhead on the system, especially when there is a large number of users. Additionally, the collected information may be outdated in a highly dynamic network environment, resulting in invalid access strategies and poor performance. To solve these issues, an intelligent DSA framework operating with local network information is desirable. Recently, researchers have introduced DRL techniques, which show superior performance on sequential decision-making tasks, to enable more flexible and intelligent DSA mechanisms [17]. Since agents in DRL can make full use of the representation ability of neural networks, the decision space can be high-dimensional and continuous, which guarantees the performance of DSA mechanisms in large-scale networks.

In the following sections, we introduce several representative applications of DRL techniques to DSA.

#### *6.5.1 Deep Multi-user Reinforcement Learning for Distributed Dynamic Spectrum Access*

In [18], a DRL-based DSA framework is proposed to manage dynamic spectrum access in multichannel wireless networks, in which each user acts as an agent that makes channel access decisions intelligently and independently to maximize its long-term transmission rate.

In this work, a wireless network composed of $N$ users and $K$ shared orthogonal channels is considered. Denote the set of users and the set of channels as $\mathcal{N} = \{1, 2, \ldots, N\}$ and $\mathcal{K} = \{1, 2, \ldots, K\}$, respectively. It is assumed that each user needs to choose a single channel for transmission in each time slot, and that it always has packets to transmit. Additionally, a transmission is successful if only one user accesses the channel, and fails otherwise. After each transmission, each user receives a binary observation $o_n(t)$ indicating whether its transmission was successful, i.e., $o_n(t) = 1$ if the transmission was successful and $o_n(t) = 0$ otherwise.

Since users do not exchange messages in each time slot, they can only make access decisions based on their local observations. To solve the above problem, a DRL-based distributed framework for DSA is proposed, in which each user acts as an agent and constructs a DQN. The action space, state space and reward function are described as follows.

1. **Action Space**: In each time slot, each user needs to choose whether to transmit or not. If the user chooses to transmit, it needs to select a channel for transmission. The action of user *n* in time slot *t* is given as

$$a_n(t) \in \{0, 1, \ldots, K\} \tag{6.26}$$

where $a_n(t) = 0$ indicates that user $n$ chooses not to transmit in time slot $t$.

2. **State Space**: The state of each user is composed of its action and observation up to time slot *t*, which is given as

$$\mathcal{H}_n(t) = \left( \{ a_n(i) \}_{i=1}^{t-1}, \{ o_n(i) \}_{i=1}^{t-1} \right) \tag{6.27}$$

3. **Reward Function**: Since the objective is to maximize the long-term rate, the function of achievable rate is chosen as the reward function

$$r_n(t) = B \log_2(1 + SNR_n(k)) \tag{6.28}$$

where $B$ is the channel bandwidth and $SNR_n(k)$ is the SNR of user $n$ on channel $k$.
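The action, observation and reward definitions above can be composed into one environment step. The array shapes and the per-user SNR table below are illustrative assumptions; the collision rule (success only when exactly one user picks a channel) is the one stated in the model.

```python
import numpy as np

def step(actions, snr, bandwidth=1.0):
    # actions[n] in {0, 1, ..., K}: 0 = no transmission, k >= 1 = transmit on channel k
    # snr[n, k-1]: SNR of user n on channel k (assumed known here for illustration)
    actions = np.asarray(actions)
    obs = np.zeros(len(actions), dtype=int)
    rewards = np.zeros(len(actions))
    for k in set(actions.tolist()) - {0}:
        users = np.flatnonzero(actions == k)
        if len(users) == 1:                 # success only if exactly one user on channel k
            n = users[0]
            obs[n] = 1                      # binary observation o_n(t)
            rewards[n] = bandwidth * np.log2(1.0 + snr[n, k - 1])
    return obs, rewards
```

Note that colliding users receive both a zero observation and a zero reward, which is exactly the feedback each agent's DQN learns from.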

In the DRL-based framework proposed in [18], in order to capture features from the observations, an LSTM network is introduced into the structure of the adopted DQN, and the DDQN method is used to improve its performance. In the training phase, the users train the parameters of their respective DQNs cooperatively by communicating with a central unit. After the DQNs are well trained, the central unit is no longer needed, and each user uses its converged DQN to make access decisions autonomously and independently.

#### *6.5.2 Deep Reinforcement Learning for Joint User Association and Resource Allocation*

In heterogeneous networks (HetNets), all base stations (BSs) normally serve users on shared spectrum bands in order to improve spectrum efficiency. However, most existing methods require accurate global network information, e.g., channel state information, as prior knowledge, which is difficult to obtain in practice.

In [19], a distributed DRL-based DSA framework is proposed for user association and resource allocation in downlink HetNets. Specifically, a three-tier heterogeneous network is considered, which consists of $N_m$ macrocell base stations (MBSs), $N_p$ pico base stations (PBSs), $N_f$ femto base stations (FBSs) and $N$ user equipments (UEs). The sets of UEs and BSs are denoted, respectively, by $\mathcal{N} = \{1, \ldots, N\}$ and $\mathcal{B} = \{0, 1, \ldots, L-1\}$, where $L = N_m + N_p + N_f$. All the BSs share the same $K$ orthogonal channels for downlink transmission, and the set of channels is denoted as $\mathcal{K} = \{1, \ldots, K\}$.

For each UE $i$, denote $\mathbf{b}_i(t) = \left[b_i^0(t), \ldots, b_i^{L-1}(t)\right]$, $i \in \mathcal{N}$, $l \in \mathcal{B}$, as the binary *user-association* vector, where $b_i^l(t) = 1$ if UE $i$ is associated with BS $l$ at time $t$ and $b_i^l(t) = 0$ otherwise. For each UE, a binary *channel-allocation* vector is defined as $\mathbf{c}_i(t) = \left[c_i^1(t), \ldots, c_i^K(t)\right]$, $i \in \mathcal{N}$, $k \in \mathcal{K}$, where $c_i^k(t) = 1$ if UE $i$ uses channel $C_k$ at time $t$ and $c_i^k(t) = 0$ otherwise. It is assumed that each UE can only be connected to one BS, and each channel can only be allocated to one UE per BS in each time slot $t$.

The transmit power between UE $i$ and its associated BS $l$ on channel $C_k$ at time $t$ is denoted as $p_{li}^k(t)$, with $\mathbf{p}_{li}(t) = \left[p_{li}^1(t), \ldots, p_{li}^K(t)\right]$, $l \in \mathcal{B}$, $i \in \mathcal{N}$, $k \in \mathcal{K}$. Since all the BSs share the common spectrum resource, the co-channel interference should be considered. Hence, the signal-to-interference-plus-noise ratio (SINR) of UE $i$ associated with BS $l$ and allocated channel $C_k$ is given as

$$\Gamma_{li}^{k}\left(t\right) = \frac{b_{i}^{l}\left(t\right) h_{l}^{i,k}\left(t\right) c_{i}^{k}\left(t\right) p_{li}^{k}\left(t\right)}{\sum_{j\in\mathcal{B}\backslash\{l\}} b_{i}^{j}\left(t\right) h_{j}^{i,k}\left(t\right) c_{i}^{k}\left(t\right) p_{ji}^{k}\left(t\right) + W N_{0}} \tag{6.29}$$

where $h_l^{i,k}(t)$ is the channel gain between UE $i$ and BS $l$ on channel $C_k$ at time $t$, $W$ is the bandwidth of each channel and $N_0$ is the noise power spectral density. Therefore, the total achievable transmission rate of UE $i$ at time $t$ can be expressed as

$$r_i\left(t\right) = \sum_{l=0}^{L-1} b_i^l\left(t\right) \sum_{k=1}^{K} W \log_2\left(1 + \Gamma_{li}^k\left(t\right)\right) \tag{6.30}$$

Considering that the operation cost of UE $i$ at BS $l$ is determined by the transmit power $p_{li}^k(t)$, the total operation cost of UE $i$ is given as

$$\varphi_i(t) = \sum_{l=0}^{L-1} \varphi_i^l(t) = \sum_{l=0}^{L-1} \lambda_l\, b_i^l(t) \sum_{k=1}^{K} c_i^k(t)\, p_{li}^k(t) \tag{6.31}$$

where $\lambda_l$ is the price per unit of transmit power at BS $l$. The utility function of UE $i$ is then defined as the total achievable profit minus the operation cost, which is denoted as

$$u_i\left(t\right) = \rho_i\, r_i\left(t\right) - \varphi_i\left(t\right) \tag{6.32}$$

where ρ*<sup>i</sup>* > 0 is the profit per unit transmission rate.
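For a single UE, the rate, cost and utility above compose directly. The 0-based indexing and argument layout below are our own sketch; the SINR values are assumed to already incorporate the association and allocation indicators, as in (6.29).

```python
import numpy as np

def ue_utility(b, c, p, gamma, W, rho, lam):
    # b[l]: association bits; c[k]: channel bits; p[l][k]: transmit powers
    # gamma[l][k]: SINR (zero on channels the UE does not use, per the model)
    # lam[l]: price per unit power at BS l; rho: profit per unit rate
    L, K = len(b), len(c)
    rate = sum(b[l] * W * np.log2(1.0 + gamma[l][k])      # total rate
               for l in range(L) for k in range(K))
    cost = sum(lam[l] * b[l] * c[k] * p[l][k]             # total operation cost
               for l in range(L) for k in range(K))
    return rho * rate - cost                              # utility
```

For example, with one BS, one allocated channel at SINR 3, unit bandwidth and profit, power 2 and price 0.5, the rate is 2 and the cost is 1, giving a utility of 1.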


In this work, the objective of each UE is to maximize its own long-term utility. Since the problem is an integer programming problem with a long-term objective, it is difficult to solve with traditional optimization algorithms such as convex optimization. Additionally, the dimension of the decision space increases exponentially with the numbers of UEs, BSs and channels. Hence, a distributed DRL-based multi-agent framework for user association and resource allocation is proposed to maximize the long-term utility.

The state space, action space and reward function for modeling such a problem are given as follows.

1. **State Space**: In each time slot, the state is composed of the QoS of all the UEs, and we have

$$\mathbf{s}\left( t \right) = \left\{ s_1\left(t\right), s_2\left(t\right), \ldots, s_N\left(t\right) \right\} \tag{6.33}$$

where $s_i(t)$ is a binary index indicating whether UE $i$'s QoS exceeds the minimum threshold $\Omega_i$, i.e., $s_i(t) = 1$ if UE $i$'s QoS is larger than $\Omega_i$, and $s_i(t) = 0$ otherwise.

2. **Action Space**: In each time slot $t$, each UE needs to choose a BS and a channel to access. Hence, the action of UE $i$ consists of two parts, i.e., the user-association vector and the channel-allocation vector

$$a_{li}^{k}\left(t\right) = \left\{b_{i}^{l}\left(t\right), c_{i}^{k}\left(t\right)\right\} \tag{6.34}$$

where $b_i^l(t) \in \{0, 1\}$ and $c_i^k(t) \in \{0, 1\}$.

3. **Reward Function**: The reward function of UE *i* is mainly determined by its achievable rate in time slot *t*. Besides, to improve the convergence performance of the algorithm, the action-selection cost is also considered in the design of the reward function.

$$R_i\left(t\right) = \begin{cases} u_i\left(t\right), & \Gamma_i\left(t\right) \ge \Omega_i \\ -\Psi_i, & \text{otherwise} \end{cases} \tag{6.35}$$

where $\Gamma_i = \sum_{l=0}^{L-1}\sum_{k=1}^{K} \Gamma_{li}^k$ is the SINR of UE $i$, $\Omega_i$ is a pre-designed minimum QoS requirement and $\Psi_i$ is the action-selection cost, which is a positive value.

In the proposed framework, each UE is equipped with a DQN to make access decisions independently. In the initialization stage, each UE first connects to the BS with the maximum reference signal received power (RSRP) and constructs a DQN whose parameters are initialized randomly. At each training time $t$, each UE observes the common state $\mathbf{s}$ and selects an action, namely an access request, according to the Q-value $Q_i(\mathbf{s}, a_i; \theta)$ obtained from its DDQN. The access request contains the indices of the requested BS and channel. If the BS accepts the request, it sends a feedback signal to the UE indicating that the resource is available; otherwise, the BS does not reply. After connecting to the chosen BS and accessing the chosen channel, the UE obtains an immediate reward $u_i(\mathbf{s}, a_i)$ and a new state $\mathbf{s}'$, and then stores the current experience $(\mathbf{s}, a_i, u_i(\mathbf{s}, a_i), \mathbf{s}')$ in its replay memory $\mathcal{D}$. Finally, each UE updates the parameters $\theta$ of its DDQN using the stochastic gradient descent (SGD) algorithm on random samples drawn from the memory $\mathcal{D}$.
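The per-UE replay memory $\mathcal{D}$ follows the standard store/sample pattern; the sketch below is a generic illustration of that pattern, not code from [19].

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-capacity experience buffer; the oldest experiences are evicted first."""

    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform minibatch for the SGD update; random sampling breaks the
        # temporal correlation between consecutive experiences
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

The bounded `deque` keeps memory usage constant while letting the network keep training on a sliding window of recent interaction.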

#### **6.6 Summary**

In this chapter, we have provided a brief review of machine learning techniques and described applications of AI-based DSM mechanisms such as spectrum sensing, signal classification and dynamic spectrum access. These AI-based DSM mechanisms have been shown to achieve better performance and robustness than conventional schemes, and they provide more efficient and flexible ways to implement DSM. The combination of AI techniques and DSM mechanisms is a novel and promising research direction.

#### **References**


**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.